00:00:00.000 Started by upstream project "autotest-per-patch" build number 130561
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:05.960 The recommended git tool is: git
00:00:05.960 using credential 00000000-0000-0000-0000-000000000002
00:00:05.963 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:05.975 Fetching changes from the remote Git repository
00:00:05.977 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:05.991 Using shallow fetch with depth 1
00:00:05.991 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:05.991 > git --version # timeout=10
00:00:06.002 > git --version # 'git version 2.39.2'
00:00:06.002 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.016 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.016 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:40.355 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:40.369 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:40.381 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:40.381 > git config core.sparsecheckout # timeout=10
00:00:40.392 > git read-tree -mu HEAD # timeout=10
00:00:40.408 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:40.426 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:40.426 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:40.503 [Pipeline] Start of Pipeline
00:00:40.516 [Pipeline] library
00:00:40.518 Loading library shm_lib@master
00:00:40.518 Library shm_lib@master is cached. Copying from home.
00:00:40.535 [Pipeline] node
00:00:40.541 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:40.543 [Pipeline] {
00:00:40.551 [Pipeline] catchError
00:00:40.552 [Pipeline] {
00:00:40.561 [Pipeline] wrap
00:00:40.567 [Pipeline] {
00:00:40.572 [Pipeline] stage
00:00:40.573 [Pipeline] { (Prologue)
00:00:40.737 [Pipeline] sh
00:00:41.019 + logger -p user.info -t JENKINS-CI
00:00:41.038 [Pipeline] echo
00:00:41.040 Node: WFP6
00:00:41.049 [Pipeline] sh
00:00:41.350 [Pipeline] setCustomBuildProperty
00:00:41.364 [Pipeline] echo
00:00:41.366 Cleanup processes
00:00:41.371 [Pipeline] sh
00:00:41.656 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.656 2156089 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.675 [Pipeline] sh
00:00:41.970 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:41.970 ++ grep -v 'sudo pgrep'
00:00:41.970 ++ awk '{print $1}'
00:00:41.970 + sudo kill -9
00:00:41.970 + true
00:00:41.983 [Pipeline] cleanWs
00:00:41.992 [WS-CLEANUP] Deleting project workspace...
00:00:41.992 [WS-CLEANUP] Deferred wipeout is used...
00:00:41.998 [WS-CLEANUP] done
00:00:42.002 [Pipeline] setCustomBuildProperty
00:00:42.015 [Pipeline] sh
00:00:42.297 + sudo git config --global --replace-all safe.directory '*'
00:00:42.401 [Pipeline] httpRequest
00:00:42.894 [Pipeline] echo
00:00:42.896 Sorcerer 10.211.164.101 is alive
00:00:42.906 [Pipeline] retry
00:00:42.909 [Pipeline] {
00:00:42.922 [Pipeline] httpRequest
00:00:42.927 HttpMethod: GET
00:00:42.927 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:42.928 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:42.934 Response Code: HTTP/1.1 200 OK
00:00:42.934 Success: Status code 200 is in the accepted range: 200,404
00:00:42.934 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:57.847 [Pipeline] }
00:00:57.863 [Pipeline] // retry
00:00:57.870 [Pipeline] sh
00:00:58.155 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:58.167 [Pipeline] httpRequest
00:00:58.726 [Pipeline] echo
00:00:58.728 Sorcerer 10.211.164.101 is alive
00:00:58.738 [Pipeline] retry
00:00:58.740 [Pipeline] {
00:00:58.754 [Pipeline] httpRequest
00:00:58.760 HttpMethod: GET
00:00:58.760 URL: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:00:58.761 Sending request to url: http://10.211.164.101/packages/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:00:58.769 Response Code: HTTP/1.1 200 OK
00:00:58.769 Success: Status code 200 is in the accepted range: 200,404
00:00:58.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:02:09.258 [Pipeline] }
00:02:09.275 [Pipeline] // retry
00:02:09.283 [Pipeline] sh
00:02:09.572 + tar --no-same-owner -xf spdk_3a41ae5b34e38019cce706608dfbf6d94ba99d76.tar.gz
00:02:12.122 [Pipeline] sh
00:02:12.408 + git -C spdk log --oneline -n5
00:02:12.408 3a41ae5b3 bdev/nvme: controller failover/multipath doc change
00:02:12.408 7b38c9ede bdev/nvme: changed default config to multipath
00:02:12.408 fefe29c8c bdev/nvme: ctrl config consistency check
00:02:12.408 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:02:12.408 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:02:12.420 [Pipeline] }
00:02:12.435 [Pipeline] // stage
00:02:12.445 [Pipeline] stage
00:02:12.447 [Pipeline] { (Prepare)
00:02:12.467 [Pipeline] writeFile
00:02:12.484 [Pipeline] sh
00:02:12.769 + logger -p user.info -t JENKINS-CI
00:02:12.782 [Pipeline] sh
00:02:13.070 + logger -p user.info -t JENKINS-CI
00:02:13.083 [Pipeline] sh
00:02:13.368 + cat autorun-spdk.conf
00:02:13.368 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.368 SPDK_TEST_NVMF=1
00:02:13.368 SPDK_TEST_NVME_CLI=1
00:02:13.368 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.368 SPDK_TEST_NVMF_NICS=e810
00:02:13.368 SPDK_TEST_VFIOUSER=1
00:02:13.368 SPDK_RUN_UBSAN=1
00:02:13.368 NET_TYPE=phy
00:02:13.376 RUN_NIGHTLY=0
00:02:13.381 [Pipeline] readFile
00:02:13.407 [Pipeline] withEnv
00:02:13.410 [Pipeline] {
00:02:13.424 [Pipeline] sh
00:02:13.712 + set -ex
00:02:13.712 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:13.712 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.712 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.712 ++ SPDK_TEST_NVMF=1
00:02:13.712 ++ SPDK_TEST_NVME_CLI=1
00:02:13.712 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.712 ++ SPDK_TEST_NVMF_NICS=e810
00:02:13.712 ++ SPDK_TEST_VFIOUSER=1
00:02:13.712 ++ SPDK_RUN_UBSAN=1
00:02:13.713 ++ NET_TYPE=phy
00:02:13.713 ++ RUN_NIGHTLY=0
00:02:13.713 + case $SPDK_TEST_NVMF_NICS in
00:02:13.713 + DRIVERS=ice
00:02:13.713 + [[ tcp == \r\d\m\a ]]
00:02:13.713 + [[ -n ice ]]
00:02:13.713 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:13.713 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:13.713 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:13.713 rmmod: ERROR: Module irdma is not currently loaded
00:02:13.713 rmmod: ERROR: Module i40iw is not currently loaded
00:02:13.713 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:13.713 + true
00:02:13.713 + for D in $DRIVERS
00:02:13.713 + sudo modprobe ice
00:02:13.713 + exit 0
00:02:13.722 [Pipeline] }
00:02:13.737 [Pipeline] // withEnv
00:02:13.742 [Pipeline] }
00:02:13.755 [Pipeline] // stage
00:02:13.767 [Pipeline] catchError
00:02:13.769 [Pipeline] {
00:02:13.783 [Pipeline] timeout
00:02:13.783 Timeout set to expire in 1 hr 0 min
00:02:13.785 [Pipeline] {
00:02:13.800 [Pipeline] stage
00:02:13.802 [Pipeline] { (Tests)
00:02:13.818 [Pipeline] sh
00:02:14.105 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.105 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.105 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.105 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:14.105 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.105 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.105 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:14.105 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.105 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.105 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.105 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:14.105 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.105 + source /etc/os-release
00:02:14.105 ++ NAME='Fedora Linux'
00:02:14.105 ++ VERSION='39 (Cloud Edition)'
00:02:14.105 ++ ID=fedora
00:02:14.105 ++ VERSION_ID=39
00:02:14.105 ++ VERSION_CODENAME=
00:02:14.105 ++ PLATFORM_ID=platform:f39
00:02:14.105 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.105 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.105 ++ LOGO=fedora-logo-icon
00:02:14.105 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.105 ++ HOME_URL=https://fedoraproject.org/
00:02:14.105 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.105 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.105 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.105 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.105 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.105 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.105 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.105 ++ SUPPORT_END=2024-11-12
00:02:14.105 ++ VARIANT='Cloud Edition'
00:02:14.105 ++ VARIANT_ID=cloud
00:02:14.105 + uname -a
00:02:14.105 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:14.105 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:16.646 Hugepages
00:02:16.646 node hugesize free / total
00:02:16.646 node0 1048576kB 0 / 0
00:02:16.646 node0 2048kB 0 / 0
00:02:16.646 node1 1048576kB 0 / 0
00:02:16.646 node1 2048kB 0 / 0
00:02:16.646
00:02:16.646 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:16.646 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:16.646 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:16.646 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:16.646 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:16.646 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:16.646 + rm -f /tmp/spdk-ld-path
00:02:16.646 + source autorun-spdk.conf
00:02:16.646 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.646 ++ SPDK_TEST_NVMF=1
00:02:16.646 ++ SPDK_TEST_NVME_CLI=1
00:02:16.646 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.646 ++ SPDK_TEST_NVMF_NICS=e810
00:02:16.646 ++ SPDK_TEST_VFIOUSER=1
00:02:16.646 ++ SPDK_RUN_UBSAN=1
00:02:16.646 ++ NET_TYPE=phy
00:02:16.646 ++ RUN_NIGHTLY=0
00:02:16.646 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:16.646 + [[ -n '' ]]
00:02:16.646 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:16.646 + for M in /var/spdk/build-*-manifest.txt
00:02:16.646 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:16.646 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.646 + for M in /var/spdk/build-*-manifest.txt
00:02:16.646 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:16.646 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.646 + for M in /var/spdk/build-*-manifest.txt
00:02:16.646 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:16.646 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.646 ++ uname
00:02:16.646 + [[ Linux == \L\i\n\u\x ]]
00:02:16.646 + sudo dmesg -T
00:02:16.906 + sudo dmesg --clear
00:02:16.906 + dmesg_pid=2157524
00:02:16.906 + [[ Fedora Linux == FreeBSD ]]
00:02:16.906 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:16.906 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:16.906 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:16.906 + [[ -x /usr/src/fio-static/fio ]]
00:02:16.906 + export FIO_BIN=/usr/src/fio-static/fio
00:02:16.906 + FIO_BIN=/usr/src/fio-static/fio
00:02:16.906 + sudo dmesg -Tw
00:02:16.906 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:16.906 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:16.906 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:16.906 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:16.906 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:16.906 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:16.906 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:16.906 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:16.906 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:16.906 Test configuration:
00:02:16.906 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.906 SPDK_TEST_NVMF=1
00:02:16.906 SPDK_TEST_NVME_CLI=1
00:02:16.906 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.906 SPDK_TEST_NVMF_NICS=e810
00:02:16.906 SPDK_TEST_VFIOUSER=1
00:02:16.906 SPDK_RUN_UBSAN=1
00:02:16.906 NET_TYPE=phy
00:02:16.906 RUN_NIGHTLY=0
15:36:26 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:16.906 15:36:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:16.906 15:36:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:16.906 15:36:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:16.906 15:36:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:16.906 15:36:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:16.906 15:36:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:16.906 15:36:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:16.906 15:36:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:16.906 15:36:26 -- paths/export.sh@5 -- $ export PATH
00:02:16.906 15:36:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:16.906 15:36:26 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:16.906 15:36:26 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:16.906 15:36:26 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727789786.XXXXXX
00:02:16.906 15:36:27 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727789786.H2TiT9
00:02:16.906 15:36:27 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:16.906 15:36:27 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:02:16.906 15:36:27 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:16.906 15:36:27 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:16.906 15:36:27 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:16.906 15:36:27 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:16.906 15:36:27 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:16.906 15:36:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:16.906 15:36:27 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:16.906 15:36:27 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:16.906 15:36:27 -- pm/common@17 -- $ local monitor
00:02:16.906 15:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:16.906 15:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:16.906 15:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:16.906 15:36:27 -- pm/common@21 -- $ date +%s
00:02:16.906 15:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:16.906 15:36:27 -- pm/common@21 -- $ date +%s
00:02:16.906 15:36:27 -- pm/common@25 -- $ sleep 1
00:02:16.906 15:36:27 -- pm/common@21 -- $ date +%s
00:02:16.906 15:36:27 -- pm/common@21 -- $ date +%s
00:02:16.906 15:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727789787
00:02:16.906 15:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727789787
00:02:16.906 15:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727789787
00:02:16.906 15:36:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727789787
00:02:16.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727789787_collect-cpu-load.pm.log
00:02:16.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727789787_collect-vmstat.pm.log
00:02:16.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727789787_collect-cpu-temp.pm.log
00:02:16.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727789787_collect-bmc-pm.bmc.pm.log
00:02:17.851 15:36:28 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:17.851 15:36:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:17.851 15:36:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:17.851 15:36:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:17.851 15:36:28 -- spdk/autobuild.sh@16 -- $ date -u
00:02:17.851 Tue Oct 1 01:36:28 PM UTC 2024
00:02:17.851 15:36:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:18.111 v25.01-pre-20-g3a41ae5b3
00:02:18.111 15:36:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:18.111 15:36:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:18.111 15:36:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:18.111 15:36:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:18.111 15:36:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:18.111 15:36:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.111 ************************************
00:02:18.111 START TEST ubsan
00:02:18.111 ************************************
00:02:18.111 15:36:28 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:18.111 using ubsan
00:02:18.111
00:02:18.111 real 0m0.000s
00:02:18.111 user 0m0.000s
00:02:18.111 sys 0m0.000s
00:02:18.111 15:36:28 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:18.111 15:36:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.111 ************************************
00:02:18.111 END TEST ubsan
************************************
00:02:18.111 15:36:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:18.111 15:36:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:18.111 15:36:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:18.111 15:36:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:18.111 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:18.111 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:18.682 Using 'verbs' RDMA provider
00:02:31.476 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:43.690 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:43.690 Creating mk/config.mk...done.
00:02:43.690 Creating mk/cc.flags.mk...done.
00:02:43.690 Type 'make' to build.
00:02:43.690 15:36:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:43.690 15:36:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:43.690 15:36:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:43.690 15:36:53 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.690 ************************************
00:02:43.690 START TEST make
00:02:43.690 ************************************
00:02:43.690 15:36:53 make -- common/autotest_common.sh@1125 -- $ make -j96
00:02:43.950 make[1]: Nothing to be done for 'all'.
00:02:45.339 The Meson build system
00:02:45.339 Version: 1.5.0
00:02:45.339 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:45.339 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:45.339 Build type: native build
00:02:45.339 Project name: libvfio-user
00:02:45.339 Project version: 0.0.1
00:02:45.339 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:45.339 C linker for the host machine: cc ld.bfd 2.40-14
00:02:45.339 Host machine cpu family: x86_64
00:02:45.339 Host machine cpu: x86_64
00:02:45.339 Run-time dependency threads found: YES
00:02:45.339 Library dl found: YES
00:02:45.339 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:45.339 Run-time dependency json-c found: YES 0.17
00:02:45.339 Run-time dependency cmocka found: YES 1.1.7
00:02:45.339 Program pytest-3 found: NO
00:02:45.339 Program flake8 found: NO
00:02:45.339 Program misspell-fixer found: NO
00:02:45.339 Program restructuredtext-lint found: NO
00:02:45.339 Program valgrind found: YES (/usr/bin/valgrind)
00:02:45.339 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.339 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.339 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.339 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:45.339 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:45.339 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:45.339 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:45.339 Build targets in project: 8
00:02:45.340 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:45.340 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:45.340
00:02:45.340 libvfio-user 0.0.1
00:02:45.340
00:02:45.340 User defined options
00:02:45.340 buildtype : debug
00:02:45.340 default_library: shared
00:02:45.340 libdir : /usr/local/lib
00:02:45.340
00:02:45.340 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:45.907 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:45.907 [1/37] Compiling C object samples/null.p/null.c.o
00:02:45.907 [2/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:45.907 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:45.907 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:45.907 [5/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:45.907 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:45.907 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:45.907 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:45.907 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:45.907 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:45.907 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:45.907 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:45.907 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:45.907 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:45.907 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:45.907 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:45.907 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:45.907 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:45.907 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:45.907 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:45.907 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:45.907 [22/37] Compiling C object samples/server.p/server.c.o
00:02:45.907 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:45.907 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:45.907 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:45.907 [26/37] Compiling C object samples/client.p/client.c.o
00:02:45.907 [27/37] Linking target samples/client
00:02:45.907 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:45.907 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:46.166 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:46.166 [31/37] Linking target test/unit_tests
00:02:46.166 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:46.166 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:46.166 [34/37] Linking target samples/gpio-pci-idio-16
00:02:46.166 [35/37] Linking target samples/server
00:02:46.166 [36/37] Linking target samples/null
00:02:46.166 [37/37] Linking target samples/lspci
00:02:46.166 INFO: autodetecting backend as ninja
00:02:46.167 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:46.167 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:46.735 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:46.735 ninja: no work to do.
00:02:52.020 The Meson build system
00:02:52.020 Version: 1.5.0
00:02:52.020 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:52.020 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:52.020 Build type: native build
00:02:52.020 Program cat found: YES (/usr/bin/cat)
00:02:52.020 Project name: DPDK
00:02:52.020 Project version: 24.03.0
00:02:52.020 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.020 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.020 Host machine cpu family: x86_64
00:02:52.020 Host machine cpu: x86_64
00:02:52.020 Message: ## Building in Developer Mode ##
00:02:52.020 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:52.020 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:52.020 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:52.020 Program python3 found: YES (/usr/bin/python3)
00:02:52.020 Program cat found: YES (/usr/bin/cat)
00:02:52.020 Compiler for C supports arguments -march=native: YES
00:02:52.020 Checking for size of "void *" : 8
00:02:52.020 Checking for size of "void *" : 8 (cached)
00:02:52.020 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:52.020 Library m found: YES
00:02:52.020 Library numa found: YES
00:02:52.020 Has header "numaif.h" : YES
00:02:52.020 Library fdt found: NO
00:02:52.020 Library execinfo found: NO
00:02:52.020 Has header "execinfo.h" : YES
00:02:52.020 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.020 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:52.020 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:52.020 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:52.020 Run-time dependency openssl found: YES 3.1.1
00:02:52.020 Run-time dependency libpcap found: YES 1.10.4
00:02:52.020 Has header "pcap.h" with dependency libpcap: YES
00:02:52.020 Compiler for C supports arguments -Wcast-qual: YES
00:02:52.020 Compiler for C supports arguments -Wdeprecated: YES
00:02:52.020 Compiler for C supports arguments -Wformat: YES
00:02:52.020 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:52.020 Compiler for C supports arguments -Wformat-security: NO
00:02:52.020 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.020 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:52.020 Compiler for C supports arguments -Wnested-externs: YES
00:02:52.020 Compiler for C supports arguments -Wold-style-definition: YES
00:02:52.020 Compiler for C supports arguments -Wpointer-arith: YES
00:02:52.020 Compiler for C supports arguments -Wsign-compare: YES
00:02:52.020 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:52.020 Compiler for C supports arguments -Wundef: YES
00:02:52.020 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.020 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:52.020 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:52.020 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.020 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:52.020 Program objdump found: YES (/usr/bin/objdump)
00:02:52.020 Compiler for C supports arguments -mavx512f: YES
00:02:52.020 Checking if "AVX512 checking" compiles: YES
00:02:52.020 Fetching value of define "__SSE4_2__" : 1
00:02:52.020 Fetching value of define "__AES__" : 1
00:02:52.020 Fetching value of define "__AVX__" : 1
00:02:52.020 Fetching value of define "__AVX2__" : 1
00:02:52.020 Fetching value of define "__AVX512BW__" : 1
00:02:52.020 Fetching value of define "__AVX512CD__" : 1
00:02:52.020 Fetching value of define "__AVX512DQ__" : 1
00:02:52.021 Fetching value of define "__AVX512F__" : 1
00:02:52.021 Fetching value of define "__AVX512VL__" : 1
00:02:52.021 Fetching value of define "__PCLMUL__" : 1
00:02:52.021 Fetching value of define "__RDRND__" : 1
00:02:52.021 Fetching value of define "__RDSEED__" : 1
00:02:52.021 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:52.021 Fetching value of define "__znver1__" : (undefined)
00:02:52.021 Fetching value of define "__znver2__" : (undefined)
00:02:52.021 Fetching value of define "__znver3__" : (undefined)
00:02:52.021 Fetching value of define "__znver4__" : (undefined)
00:02:52.021 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:52.021 Message: lib/log: Defining dependency "log"
00:02:52.021 Message: lib/kvargs: Defining dependency "kvargs"
00:02:52.021 Message: lib/telemetry: Defining dependency "telemetry"
00:02:52.021 Checking for function "getentropy" : NO
00:02:52.021 Message: lib/eal: Defining dependency "eal"
00:02:52.021 Message: lib/ring: Defining dependency "ring"
00:02:52.021 Message: lib/rcu: Defining dependency "rcu"
00:02:52.021 Message: lib/mempool: Defining dependency "mempool"
00:02:52.021 Message: lib/mbuf: Defining dependency "mbuf"
00:02:52.021 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:52.021 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.021 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.021 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:52.021 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:52.021 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:52.021 Compiler for C supports arguments -mpclmul: YES
00:02:52.021 Compiler for C supports arguments -maes: YES
00:02:52.021 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.021 Compiler for C supports arguments -mavx512bw: YES
00:02:52.021 Compiler for C supports arguments -mavx512dq: YES
00:02:52.021 Compiler for C supports arguments -mavx512vl: YES
00:02:52.021 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:52.021 Compiler for C supports arguments -mavx2: YES
00:02:52.021 Compiler for C supports arguments -mavx: YES
00:02:52.021 Message: lib/net: Defining dependency "net"
00:02:52.021 Message: lib/meter: Defining dependency "meter"
00:02:52.021 Message: lib/ethdev: Defining dependency "ethdev"
00:02:52.021 Message: lib/pci: Defining dependency "pci"
00:02:52.021 Message: lib/cmdline: Defining dependency "cmdline"
00:02:52.021 Message: lib/hash: Defining dependency "hash"
00:02:52.021 Message: lib/timer: Defining dependency "timer"
00:02:52.021 Message: lib/compressdev: Defining dependency "compressdev"
00:02:52.021 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:52.021 Message: lib/dmadev: Defining dependency "dmadev"
00:02:52.021 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:52.021 Message: lib/power: Defining dependency "power"
00:02:52.021 Message: lib/reorder: Defining dependency "reorder"
00:02:52.021 Message: lib/security: Defining dependency "security"
00:02:52.021 Has header "linux/userfaultfd.h" : YES
00:02:52.021 Has header "linux/vduse.h" : YES
00:02:52.021 Message: lib/vhost: Defining dependency "vhost"
00:02:52.021 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:52.021 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:52.021 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:52.021 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.021 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:52.021 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:52.021 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:52.021 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:52.021 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:52.021 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:52.021 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:52.021 Configuring doxy-api-html.conf using configuration
00:02:52.021 Configuring doxy-api-man.conf using configuration
00:02:52.021 Program mandb found: YES (/usr/bin/mandb)
00:02:52.021 Program sphinx-build found: NO
00:02:52.021 Configuring rte_build_config.h using configuration
00:02:52.021 Message:
00:02:52.021 =================
00:02:52.021 Applications Enabled
00:02:52.021 =================
00:02:52.021
00:02:52.021 apps:
00:02:52.021
00:02:52.021
00:02:52.021 Message:
00:02:52.021 =================
00:02:52.021 Libraries Enabled
00:02:52.021 =================
00:02:52.021
00:02:52.021 libs:
00:02:52.021 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.021 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:52.021 cryptodev, dmadev, power, reorder, security, vhost,
00:02:52.021
00:02:52.021 Message:
00:02:52.021 ===============
00:02:52.021 Drivers Enabled
00:02:52.021 ===============
00:02:52.021
00:02:52.021 common:
00:02:52.021
00:02:52.021 bus:
00:02:52.021 pci, vdev,
00:02:52.021 mempool:
00:02:52.021 ring,
00:02:52.021 dma:
00:02:52.021
00:02:52.021 net:
00:02:52.021
00:02:52.021 crypto:
00:02:52.021
00:02:52.021 compress:
00:02:52.021
00:02:52.021 vdpa:
00:02:52.021
00:02:52.021
00:02:52.021 Message:
00:02:52.021 =================
00:02:52.021 Content Skipped
00:02:52.021 =================
00:02:52.021
00:02:52.021 apps:
00:02:52.021 dumpcap: explicitly disabled via build config
00:02:52.021 graph: explicitly disabled via build config
00:02:52.021 pdump: explicitly disabled via build config
00:02:52.021 proc-info: explicitly disabled via build config
00:02:52.021 test-acl: explicitly disabled via build config
00:02:52.021 test-bbdev: explicitly disabled via build config
00:02:52.021 test-cmdline: explicitly disabled via build config
00:02:52.021 test-compress-perf: explicitly disabled via build config
00:02:52.021 test-crypto-perf: explicitly disabled via build config
00:02:52.021 test-dma-perf: explicitly disabled via build config
00:02:52.021 test-eventdev: explicitly disabled via build config
00:02:52.021 test-fib: explicitly disabled via build config
00:02:52.021 test-flow-perf: explicitly disabled via build config
00:02:52.021 test-gpudev: explicitly disabled via build config
00:02:52.021 test-mldev: explicitly disabled via build config
00:02:52.021 test-pipeline: explicitly disabled via build config
00:02:52.021 test-pmd: explicitly disabled via build config
00:02:52.021 test-regex: explicitly disabled via build config
00:02:52.021 test-sad: explicitly disabled via build config
00:02:52.021 test-security-perf: explicitly disabled via build config
00:02:52.021
00:02:52.021 libs:
00:02:52.021 argparse: explicitly disabled via build config
00:02:52.021 metrics: explicitly disabled via build config
00:02:52.021 acl: explicitly disabled via build config
00:02:52.021 bbdev: explicitly disabled via build config
00:02:52.021 bitratestats: explicitly disabled via build config
00:02:52.021 bpf: explicitly disabled via build config
00:02:52.021 cfgfile: explicitly disabled via build config
00:02:52.021 distributor: explicitly disabled via build config
00:02:52.021 efd: explicitly disabled via build config
00:02:52.021 eventdev: explicitly disabled via build config
00:02:52.021 dispatcher: explicitly disabled via build config
00:02:52.021 gpudev: explicitly disabled via build config
00:02:52.021 gro: explicitly disabled via build config
00:02:52.021 gso: explicitly disabled via build config
00:02:52.021 ip_frag: explicitly disabled via build config
00:02:52.021 jobstats: explicitly disabled via build config
00:02:52.021 latencystats: explicitly disabled via build config
00:02:52.021 lpm: explicitly disabled via build config
00:02:52.021 member: explicitly disabled via build config
00:02:52.021 pcapng: explicitly disabled via build config
00:02:52.021 rawdev: explicitly disabled via build config
00:02:52.021 regexdev: explicitly disabled via build config
00:02:52.021 mldev: explicitly disabled via build config
00:02:52.021 rib: explicitly disabled via build config
00:02:52.021 sched: explicitly disabled via build config
00:02:52.021 stack: explicitly disabled via build config
00:02:52.021 ipsec: explicitly disabled via build config
00:02:52.021 pdcp: explicitly disabled via build config
00:02:52.021 fib: explicitly disabled via build config
00:02:52.021 port: explicitly disabled via build config
00:02:52.021 pdump: explicitly disabled via build config
00:02:52.021 table: explicitly disabled via build config
00:02:52.021 pipeline: explicitly disabled via build config
00:02:52.021 graph: explicitly disabled via build config
00:02:52.021 node: explicitly disabled via build config
00:02:52.021
00:02:52.021 drivers:
00:02:52.021 common/cpt: not in enabled drivers build config
00:02:52.021 common/dpaax: not in enabled drivers build config
00:02:52.021 common/iavf: not in enabled drivers build config
00:02:52.021 common/idpf: not in enabled drivers build config
00:02:52.021 common/ionic: not in enabled drivers build config
00:02:52.021 common/mvep: not in enabled drivers build config
00:02:52.021 common/octeontx: not in enabled drivers build config
00:02:52.021 bus/auxiliary: not in enabled drivers build config
00:02:52.021 bus/cdx: not in enabled drivers build config
00:02:52.021 bus/dpaa: not in enabled drivers build config
00:02:52.021 bus/fslmc: not in enabled drivers build config
00:02:52.021 bus/ifpga: not in enabled drivers build config
00:02:52.021 bus/platform: not in enabled drivers build config
00:02:52.021 bus/uacce: not in enabled drivers build config
00:02:52.021 bus/vmbus: not in enabled drivers build config
00:02:52.021 common/cnxk: not in enabled drivers build config
00:02:52.021 common/mlx5: not in enabled drivers build config
00:02:52.021 common/nfp: not in enabled drivers build config
00:02:52.021 common/nitrox: not in enabled drivers build config
00:02:52.021 common/qat: not in enabled drivers build config
00:02:52.021 common/sfc_efx: not in enabled drivers build config
00:02:52.021 mempool/bucket: not in enabled drivers build config
00:02:52.021 mempool/cnxk: not in enabled drivers build config
00:02:52.021 mempool/dpaa: not in enabled drivers build config
00:02:52.021 mempool/dpaa2: not in enabled drivers build config
00:02:52.021 mempool/octeontx: not in enabled drivers build config
00:02:52.021 mempool/stack: not in enabled drivers build config
00:02:52.021 dma/cnxk: not in enabled drivers build config
00:02:52.021 dma/dpaa: not in enabled drivers build config
00:02:52.021 dma/dpaa2: not in enabled drivers build config
00:02:52.021 dma/hisilicon: not in enabled drivers build config
00:02:52.021 dma/idxd: not in enabled drivers build config
00:02:52.021 dma/ioat: not in enabled drivers build config
00:02:52.021 dma/skeleton: not in enabled drivers build config
00:02:52.021 net/af_packet: not in enabled drivers build config
00:02:52.021 net/af_xdp: not in enabled drivers build config
00:02:52.021 net/ark: not in enabled drivers build config
00:02:52.021 net/atlantic: not in enabled drivers build config
00:02:52.021 net/avp: not in enabled drivers build config
00:02:52.021 net/axgbe: not in enabled drivers build config
00:02:52.021 net/bnx2x: not in enabled drivers build config
00:02:52.021 net/bnxt: not in enabled drivers build config
00:02:52.021 net/bonding: not in enabled drivers build config
00:02:52.021 net/cnxk: not in enabled drivers build config
00:02:52.021 net/cpfl: not in enabled drivers build config
00:02:52.021 net/cxgbe: not in enabled drivers build config
00:02:52.021 net/dpaa: not in enabled drivers build config
00:02:52.021 net/dpaa2: not in enabled drivers build config
00:02:52.021 net/e1000: not in enabled drivers build config
00:02:52.021 net/ena: not in enabled drivers build config
00:02:52.021 net/enetc: not in enabled drivers build config
00:02:52.021 net/enetfec: not in enabled drivers build config
00:02:52.021 net/enic: not in enabled drivers build config
00:02:52.021 net/failsafe: not in enabled drivers build config
00:02:52.021 net/fm10k: not in enabled drivers build config
00:02:52.021 net/gve: not in enabled drivers build config
00:02:52.021 net/hinic: not in enabled drivers build config
00:02:52.021 net/hns3: not in enabled drivers build config
00:02:52.021 net/i40e: not in enabled drivers build config
00:02:52.021 net/iavf: not in enabled drivers build config
00:02:52.021 net/ice: not in enabled drivers build config
00:02:52.021 net/idpf: not in enabled drivers build config
00:02:52.021 net/igc: not in enabled drivers build config
00:02:52.021 net/ionic: not in enabled drivers build config
00:02:52.021 net/ipn3ke: not in enabled drivers build config
00:02:52.021 net/ixgbe: not in enabled drivers build config
00:02:52.021 net/mana: not in enabled drivers build config
00:02:52.021 net/memif: not in enabled drivers build config
00:02:52.021 net/mlx4: not in enabled drivers build config
00:02:52.021 net/mlx5: not in enabled drivers build config
00:02:52.021 net/mvneta: not in enabled drivers build config
00:02:52.021 net/mvpp2: not in enabled drivers build config
00:02:52.021 net/netvsc: not in enabled drivers build config
00:02:52.021 net/nfb: not in enabled drivers build config
00:02:52.021 net/nfp: not in enabled drivers build config
00:02:52.021 net/ngbe: not in enabled drivers build config
00:02:52.021 net/null: not in enabled drivers build config
00:02:52.021 net/octeontx: not in enabled drivers build config
00:02:52.021 net/octeon_ep: not in enabled drivers build config
00:02:52.021 net/pcap: not in enabled drivers build config
00:02:52.021 net/pfe: not in enabled drivers build config
00:02:52.021 net/qede: not in enabled drivers build config
00:02:52.021 net/ring: not in enabled drivers build config
00:02:52.021 net/sfc: not in enabled drivers build config
00:02:52.021 net/softnic: not in enabled drivers build config
00:02:52.021 net/tap: not in enabled drivers build config
00:02:52.021 net/thunderx: not in enabled drivers build config
00:02:52.021 net/txgbe: not in enabled drivers build config
00:02:52.021 net/vdev_netvsc: not in enabled drivers build config
00:02:52.021 net/vhost: not in enabled drivers build config
00:02:52.021 net/virtio: not in enabled drivers build config
00:02:52.021 net/vmxnet3: not in enabled drivers build config
00:02:52.021 raw/*: missing internal dependency, "rawdev"
00:02:52.021 crypto/armv8: not in enabled drivers build config
00:02:52.021 crypto/bcmfs: not in enabled drivers build config
00:02:52.021 crypto/caam_jr: not in enabled drivers build config
00:02:52.021 crypto/ccp: not in enabled drivers build config
00:02:52.021 crypto/cnxk: not in enabled drivers build config
00:02:52.021 crypto/dpaa_sec: not in enabled drivers build config
00:02:52.021 crypto/dpaa2_sec: not in enabled drivers build config
00:02:52.021 crypto/ipsec_mb: not in enabled drivers build config
00:02:52.021 crypto/mlx5: not in enabled drivers build config
00:02:52.021 crypto/mvsam: not in enabled drivers build config
00:02:52.021 crypto/nitrox: not in enabled drivers build config
00:02:52.021 crypto/null: not in enabled drivers build config
00:02:52.021 crypto/octeontx: not in enabled drivers build config
00:02:52.021 crypto/openssl: not in enabled drivers build config
00:02:52.021 crypto/scheduler: not in enabled drivers build config
00:02:52.021 crypto/uadk: not in enabled drivers build config
00:02:52.021 crypto/virtio: not in enabled drivers build config
00:02:52.021 compress/isal: not in enabled drivers build config
00:02:52.021 compress/mlx5: not in enabled drivers build config
00:02:52.021 compress/nitrox: not in enabled drivers build config
00:02:52.021 compress/octeontx: not in enabled drivers build config
00:02:52.021 compress/zlib: not in enabled drivers build config
00:02:52.021 regex/*: missing internal dependency, "regexdev"
00:02:52.021 ml/*: missing internal dependency, "mldev"
00:02:52.021 vdpa/ifc: not in enabled drivers build config
00:02:52.021 vdpa/mlx5: not in enabled drivers build config
00:02:52.021 vdpa/nfp: not in enabled drivers build config
00:02:52.021 vdpa/sfc: not in enabled drivers build config
00:02:52.021 event/*: missing internal dependency, "eventdev"
00:02:52.021 baseband/*: missing internal dependency, "bbdev"
00:02:52.021 gpu/*: missing internal dependency, "gpudev"
00:02:52.021
00:02:52.021
00:02:52.021 Build targets in project: 85
00:02:52.021
00:02:52.021 DPDK 24.03.0
00:02:52.021
00:02:52.021 User defined options
00:02:52.021 buildtype : debug
00:02:52.021 default_library : shared
00:02:52.021 libdir : lib
00:02:52.021 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:52.021 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:52.021 c_link_args :
00:02:52.021 cpu_instruction_set: native
00:02:52.021 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:52.021 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:52.021 enable_docs : false
00:02:52.021 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:52.021 enable_kmods : false
00:02:52.021 max_lcores : 128
00:02:52.021 tests : false
00:02:52.021
00:02:52.021 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.595 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:52.595 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:52.595 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:52.595 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:52.595 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:52.595 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:52.595 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:52.595 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:52.595 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:52.595 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:52.595 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:52.596 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:52.596 [12/268] Linking static target lib/librte_kvargs.a
00:02:52.596 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:52.596 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:52.596 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:52.596 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:52.596 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:52.596 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:52.856 [19/268] Linking static target lib/librte_log.a
00:02:52.856 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:52.856 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:52.856 [22/268] Linking static target lib/librte_pci.a
00:02:52.856 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:52.856 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:53.114 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:53.114 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:53.114 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:53.114 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:53.114 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:53.114 [30/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:53.114 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:53.114 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:53.114 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:53.114 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:53.114 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:53.114 [36/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:53.115 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:53.115 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:53.115 [39/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:53.115 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:53.115 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:53.115 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:53.115 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:53.115 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:53.115 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:53.115 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:53.115 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:53.115 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:53.115 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:53.115 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:53.115 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:53.115 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:53.115 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:53.115 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:53.115 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:53.115 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:53.115 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:53.115 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.115 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:53.115 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:53.115 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:53.115 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:53.115 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:53.115 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:53.115 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:53.115 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:53.115 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:53.115 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:53.115 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.115 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:53.115 [71/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:53.115 [72/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:53.115 [73/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:53.115 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:53.115 [75/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:53.115 [76/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:53.115 [77/268] Linking static target lib/librte_ring.a
00:02:53.115 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:53.115 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:53.115 [80/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.115 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:53.115 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:53.115 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:53.115 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:53.115 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:53.115 [86/268] Linking static target lib/librte_meter.a
00:02:53.115 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:53.115 [88/268] Linking static target lib/librte_telemetry.a
00:02:53.115 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:53.115 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:53.115 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:53.374 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.374 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:53.374 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:53.374 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:53.374 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:53.374 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:53.374 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:53.374 [99/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:53.374 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:53.374 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:53.374 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:53.374 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:53.374 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.374 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:53.374 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:53.374 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:53.374 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:53.374 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:53.374 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:53.374 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:53.374 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:53.374 [113/268] Linking static target lib/librte_mempool.a
00:02:53.374 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:53.374 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:53.374 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:53.374 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:53.374 [118/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:53.374 [119/268] Linking static target lib/librte_net.a
00:02:53.374 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:53.374 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:53.374 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:53.374 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:53.374 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:53.374 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:53.374 [126/268] Linking static target lib/librte_rcu.a
00:02:53.374 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:53.374 [128/268] Linking static target lib/librte_eal.a
00:02:53.374 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:53.374 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:53.374 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:53.374 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:53.374 [133/268] Linking static target lib/librte_cmdline.a
00:02:53.374 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:53.634 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.634 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:53.634 [137/268] Linking static target lib/librte_mbuf.a
00:02:53.634 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.634 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.634 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:53.634 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:53.634 [142/268] Linking target lib/librte_log.so.24.1
00:02:53.634 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:53.634 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:53.634 [145/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:53.634 [146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:53.634 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:53.634 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:53.634 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:53.634 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.634 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:53.634 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:53.634 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:53.634 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:53.634 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:53.634 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:53.634 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:53.634 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:53.634 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:53.634 [160/268] Linking static target lib/librte_timer.a
00:02:53.634 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:53.634 [162/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:53.634 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:53.634 [164/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.634 [165/268] Linking static target lib/librte_dmadev.a
00:02:53.635 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:53.635 [167/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:53.635 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:53.635 [169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:53.635 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:53.635 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:53.635 [172/268] Linking static target lib/librte_compressdev.a
00:02:53.635 [173/268] Linking target lib/librte_kvargs.so.24.1
00:02:53.893 [174/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.893 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:53.893 [176/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:53.893 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:53.893 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:53.893 [179/268] Linking target lib/librte_telemetry.so.24.1
00:02:53.893 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:53.893 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:53.893 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:53.893 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:53.893 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:53.893 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:53.893 [186/268] Linking static target lib/librte_reorder.a
00:02:53.893 [187/268] Linking static target lib/librte_power.a
00:02:53.893 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:53.893 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:53.893 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:53.893 [191/268] Linking static target lib/librte_hash.a
00:02:53.893 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:53.893 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:53.893 [194/268] Linking static target lib/librte_security.a
00:02:53.893 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:53.893 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:53.893 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:53.893 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:53.893 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:53.894 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:54.153 [201/268] Linking static target drivers/librte_bus_vdev.a
00:02:54.153 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:54.153 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:54.153 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.153 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:54.153 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:54.153 [207/268] Linking static target drivers/librte_bus_pci.a
00:02:54.153 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:54.153 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.153 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:54.153 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:54.153 [212/268] Linking static target drivers/librte_mempool_ring.a
00:02:54.153 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:54.153 [214/268] Linking static target lib/librte_cryptodev.a
00:02:54.153 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.412 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.412 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.412 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.412 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.412 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:54.412 [221/268] Linking static target
lib/librte_ethdev.a 00:02:54.672 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.672 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.672 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.672 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.931 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.931 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.867 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.867 [229/268] Linking static target lib/librte_vhost.a 00:02:56.126 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.505 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.801 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.394 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.394 [234/268] Linking target lib/librte_eal.so.24.1 00:03:03.653 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.653 [236/268] Linking target lib/librte_ring.so.24.1 00:03:03.653 [237/268] Linking target lib/librte_meter.so.24.1 00:03:03.653 [238/268] Linking target lib/librte_pci.so.24.1 00:03:03.653 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.653 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:03.653 [241/268] Linking target lib/librte_timer.so.24.1 00:03:03.653 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.653 [243/268] Generating symbol file 
lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.653 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.653 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.653 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.912 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:03.912 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:03.912 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.912 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.912 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.912 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:03.912 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.172 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.172 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:04.172 [256/268] Linking target lib/librte_net.so.24.1 00:03:04.172 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:04.172 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:04.172 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.172 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.431 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:04.431 [262/268] Linking target lib/librte_security.so.24.1 00:03:04.431 [263/268] Linking target lib/librte_hash.so.24.1 00:03:04.431 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:04.431 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.431 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:04.432 [267/268] Linking target lib/librte_power.so.24.1 00:03:04.690 
[268/268] Linking target lib/librte_vhost.so.24.1 00:03:04.690 INFO: autodetecting backend as ninja 00:03:04.690 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:16.896 CC lib/ut_mock/mock.o 00:03:16.896 CC lib/log/log.o 00:03:16.896 CC lib/log/log_flags.o 00:03:16.896 CC lib/log/log_deprecated.o 00:03:16.896 CC lib/ut/ut.o 00:03:16.896 LIB libspdk_ut_mock.a 00:03:16.896 LIB libspdk_log.a 00:03:16.896 LIB libspdk_ut.a 00:03:16.896 SO libspdk_ut_mock.so.6.0 00:03:16.896 SO libspdk_log.so.7.0 00:03:16.896 SO libspdk_ut.so.2.0 00:03:16.896 SYMLINK libspdk_ut_mock.so 00:03:16.896 SYMLINK libspdk_log.so 00:03:16.896 SYMLINK libspdk_ut.so 00:03:16.896 CXX lib/trace_parser/trace.o 00:03:16.896 CC lib/ioat/ioat.o 00:03:16.896 CC lib/dma/dma.o 00:03:16.896 CC lib/util/base64.o 00:03:16.896 CC lib/util/bit_array.o 00:03:16.896 CC lib/util/cpuset.o 00:03:16.897 CC lib/util/crc16.o 00:03:16.897 CC lib/util/crc32.o 00:03:16.897 CC lib/util/crc32c.o 00:03:16.897 CC lib/util/crc32_ieee.o 00:03:16.897 CC lib/util/crc64.o 00:03:16.897 CC lib/util/dif.o 00:03:16.897 CC lib/util/fd.o 00:03:16.897 CC lib/util/fd_group.o 00:03:16.897 CC lib/util/file.o 00:03:16.897 CC lib/util/hexlify.o 00:03:16.897 CC lib/util/iov.o 00:03:16.897 CC lib/util/math.o 00:03:16.897 CC lib/util/net.o 00:03:16.897 CC lib/util/pipe.o 00:03:16.897 CC lib/util/strerror_tls.o 00:03:16.897 CC lib/util/string.o 00:03:16.897 CC lib/util/uuid.o 00:03:16.897 CC lib/util/xor.o 00:03:16.897 CC lib/util/zipf.o 00:03:16.897 CC lib/util/md5.o 00:03:16.897 CC lib/vfio_user/host/vfio_user.o 00:03:16.897 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.897 LIB libspdk_dma.a 00:03:16.897 SO libspdk_dma.so.5.0 00:03:16.897 LIB libspdk_ioat.a 00:03:16.897 SO libspdk_ioat.so.7.0 00:03:16.897 SYMLINK libspdk_dma.so 00:03:16.897 SYMLINK libspdk_ioat.so 00:03:16.897 LIB libspdk_vfio_user.a 00:03:16.897 SO libspdk_vfio_user.so.5.0 
00:03:16.897 LIB libspdk_util.a 00:03:16.897 SYMLINK libspdk_vfio_user.so 00:03:16.897 SO libspdk_util.so.10.0 00:03:16.897 SYMLINK libspdk_util.so 00:03:16.897 LIB libspdk_trace_parser.a 00:03:16.897 SO libspdk_trace_parser.so.6.0 00:03:16.897 SYMLINK libspdk_trace_parser.so 00:03:16.897 CC lib/rdma_utils/rdma_utils.o 00:03:16.897 CC lib/rdma_provider/common.o 00:03:16.897 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.897 CC lib/idxd/idxd.o 00:03:16.897 CC lib/idxd/idxd_user.o 00:03:16.897 CC lib/idxd/idxd_kernel.o 00:03:16.897 CC lib/json/json_parse.o 00:03:16.897 CC lib/json/json_util.o 00:03:16.897 CC lib/vmd/vmd.o 00:03:16.897 CC lib/env_dpdk/env.o 00:03:16.897 CC lib/json/json_write.o 00:03:16.897 CC lib/env_dpdk/memory.o 00:03:16.897 CC lib/vmd/led.o 00:03:16.897 CC lib/conf/conf.o 00:03:16.897 CC lib/env_dpdk/pci.o 00:03:16.897 CC lib/env_dpdk/init.o 00:03:16.897 CC lib/env_dpdk/threads.o 00:03:16.897 CC lib/env_dpdk/pci_ioat.o 00:03:16.897 CC lib/env_dpdk/pci_virtio.o 00:03:16.897 CC lib/env_dpdk/pci_vmd.o 00:03:16.897 CC lib/env_dpdk/pci_event.o 00:03:16.897 CC lib/env_dpdk/pci_idxd.o 00:03:16.897 CC lib/env_dpdk/sigbus_handler.o 00:03:16.897 CC lib/env_dpdk/pci_dpdk.o 00:03:16.897 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.897 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.897 LIB libspdk_rdma_provider.a 00:03:16.897 LIB libspdk_conf.a 00:03:16.897 SO libspdk_rdma_provider.so.6.0 00:03:16.897 SO libspdk_conf.so.6.0 00:03:16.897 LIB libspdk_rdma_utils.a 00:03:16.897 LIB libspdk_json.a 00:03:16.897 SO libspdk_rdma_utils.so.1.0 00:03:16.897 SYMLINK libspdk_rdma_provider.so 00:03:16.897 SYMLINK libspdk_conf.so 00:03:16.897 SO libspdk_json.so.6.0 00:03:17.156 SYMLINK libspdk_rdma_utils.so 00:03:17.156 SYMLINK libspdk_json.so 00:03:17.156 LIB libspdk_idxd.a 00:03:17.156 LIB libspdk_vmd.a 00:03:17.156 SO libspdk_idxd.so.12.1 00:03:17.156 SO libspdk_vmd.so.6.0 00:03:17.414 SYMLINK libspdk_idxd.so 00:03:17.414 SYMLINK libspdk_vmd.so 00:03:17.414 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:17.414 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.414 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.414 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.672 LIB libspdk_jsonrpc.a 00:03:17.672 SO libspdk_jsonrpc.so.6.0 00:03:17.672 SYMLINK libspdk_jsonrpc.so 00:03:17.672 LIB libspdk_env_dpdk.a 00:03:17.672 SO libspdk_env_dpdk.so.15.0 00:03:17.931 SYMLINK libspdk_env_dpdk.so 00:03:17.931 CC lib/rpc/rpc.o 00:03:18.190 LIB libspdk_rpc.a 00:03:18.190 SO libspdk_rpc.so.6.0 00:03:18.190 SYMLINK libspdk_rpc.so 00:03:18.450 CC lib/trace/trace.o 00:03:18.450 CC lib/notify/notify.o 00:03:18.450 CC lib/trace/trace_flags.o 00:03:18.450 CC lib/notify/notify_rpc.o 00:03:18.450 CC lib/trace/trace_rpc.o 00:03:18.450 CC lib/keyring/keyring.o 00:03:18.450 CC lib/keyring/keyring_rpc.o 00:03:18.709 LIB libspdk_notify.a 00:03:18.709 SO libspdk_notify.so.6.0 00:03:18.709 LIB libspdk_trace.a 00:03:18.709 LIB libspdk_keyring.a 00:03:18.709 SO libspdk_trace.so.11.0 00:03:18.709 SO libspdk_keyring.so.2.0 00:03:18.709 SYMLINK libspdk_notify.so 00:03:18.968 SYMLINK libspdk_trace.so 00:03:18.968 SYMLINK libspdk_keyring.so 00:03:19.227 CC lib/thread/thread.o 00:03:19.227 CC lib/thread/iobuf.o 00:03:19.227 CC lib/sock/sock.o 00:03:19.227 CC lib/sock/sock_rpc.o 00:03:19.485 LIB libspdk_sock.a 00:03:19.485 SO libspdk_sock.so.10.0 00:03:19.485 SYMLINK libspdk_sock.so 00:03:19.744 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.744 CC lib/nvme/nvme_ctrlr.o 00:03:19.744 CC lib/nvme/nvme_fabric.o 00:03:19.744 CC lib/nvme/nvme_ns_cmd.o 00:03:19.744 CC lib/nvme/nvme_ns.o 00:03:19.744 CC lib/nvme/nvme_pcie_common.o 00:03:19.744 CC lib/nvme/nvme_pcie.o 00:03:19.744 CC lib/nvme/nvme_qpair.o 00:03:19.744 CC lib/nvme/nvme.o 00:03:19.744 CC lib/nvme/nvme_quirks.o 00:03:19.744 CC lib/nvme/nvme_transport.o 00:03:19.744 CC lib/nvme/nvme_discovery.o 00:03:19.744 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.744 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.744 CC lib/nvme/nvme_tcp.o 00:03:19.744 CC 
lib/nvme/nvme_opal.o 00:03:19.744 CC lib/nvme/nvme_io_msg.o 00:03:19.744 CC lib/nvme/nvme_poll_group.o 00:03:20.002 CC lib/nvme/nvme_zns.o 00:03:20.002 CC lib/nvme/nvme_stubs.o 00:03:20.002 CC lib/nvme/nvme_auth.o 00:03:20.002 CC lib/nvme/nvme_cuse.o 00:03:20.002 CC lib/nvme/nvme_vfio_user.o 00:03:20.002 CC lib/nvme/nvme_rdma.o 00:03:20.259 LIB libspdk_thread.a 00:03:20.259 SO libspdk_thread.so.10.1 00:03:20.259 SYMLINK libspdk_thread.so 00:03:20.517 CC lib/blob/blobstore.o 00:03:20.517 CC lib/blob/request.o 00:03:20.517 CC lib/blob/zeroes.o 00:03:20.517 CC lib/blob/blob_bs_dev.o 00:03:20.775 CC lib/accel/accel.o 00:03:20.775 CC lib/accel/accel_rpc.o 00:03:20.775 CC lib/accel/accel_sw.o 00:03:20.775 CC lib/vfu_tgt/tgt_endpoint.o 00:03:20.775 CC lib/vfu_tgt/tgt_rpc.o 00:03:20.775 CC lib/init/json_config.o 00:03:20.775 CC lib/fsdev/fsdev.o 00:03:20.775 CC lib/init/subsystem.o 00:03:20.775 CC lib/init/subsystem_rpc.o 00:03:20.776 CC lib/fsdev/fsdev_io.o 00:03:20.776 CC lib/init/rpc.o 00:03:20.776 CC lib/virtio/virtio.o 00:03:20.776 CC lib/fsdev/fsdev_rpc.o 00:03:20.776 CC lib/virtio/virtio_vhost_user.o 00:03:20.776 CC lib/virtio/virtio_vfio_user.o 00:03:20.776 CC lib/virtio/virtio_pci.o 00:03:20.776 LIB libspdk_init.a 00:03:21.034 SO libspdk_init.so.6.0 00:03:21.034 LIB libspdk_vfu_tgt.a 00:03:21.034 LIB libspdk_virtio.a 00:03:21.034 SYMLINK libspdk_init.so 00:03:21.034 SO libspdk_vfu_tgt.so.3.0 00:03:21.034 SO libspdk_virtio.so.7.0 00:03:21.034 SYMLINK libspdk_vfu_tgt.so 00:03:21.034 SYMLINK libspdk_virtio.so 00:03:21.034 LIB libspdk_fsdev.a 00:03:21.294 SO libspdk_fsdev.so.1.0 00:03:21.294 CC lib/event/app.o 00:03:21.294 SYMLINK libspdk_fsdev.so 00:03:21.294 CC lib/event/reactor.o 00:03:21.294 CC lib/event/log_rpc.o 00:03:21.294 CC lib/event/app_rpc.o 00:03:21.294 CC lib/event/scheduler_static.o 00:03:21.553 LIB libspdk_accel.a 00:03:21.553 SO libspdk_accel.so.16.0 00:03:21.553 LIB libspdk_nvme.a 00:03:21.553 SYMLINK libspdk_accel.so 00:03:21.553 CC 
lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.553 LIB libspdk_event.a 00:03:21.553 SO libspdk_nvme.so.14.0 00:03:21.553 SO libspdk_event.so.14.0 00:03:21.812 SYMLINK libspdk_event.so 00:03:21.812 SYMLINK libspdk_nvme.so 00:03:21.812 CC lib/bdev/bdev.o 00:03:21.812 CC lib/bdev/bdev_rpc.o 00:03:21.812 CC lib/bdev/bdev_zone.o 00:03:21.812 CC lib/bdev/part.o 00:03:21.812 CC lib/bdev/scsi_nvme.o 00:03:22.071 LIB libspdk_fuse_dispatcher.a 00:03:22.071 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.071 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.008 LIB libspdk_blob.a 00:03:23.008 SO libspdk_blob.so.11.0 00:03:23.008 SYMLINK libspdk_blob.so 00:03:23.267 CC lib/lvol/lvol.o 00:03:23.267 CC lib/blobfs/blobfs.o 00:03:23.267 CC lib/blobfs/tree.o 00:03:23.525 LIB libspdk_bdev.a 00:03:23.784 SO libspdk_bdev.so.16.0 00:03:23.784 SYMLINK libspdk_bdev.so 00:03:23.784 LIB libspdk_blobfs.a 00:03:23.784 LIB libspdk_lvol.a 00:03:23.784 SO libspdk_blobfs.so.10.0 00:03:23.784 SO libspdk_lvol.so.10.0 00:03:24.043 SYMLINK libspdk_blobfs.so 00:03:24.043 SYMLINK libspdk_lvol.so 00:03:24.043 CC lib/nbd/nbd.o 00:03:24.043 CC lib/nbd/nbd_rpc.o 00:03:24.043 CC lib/nvmf/ctrlr.o 00:03:24.043 CC lib/nvmf/ctrlr_discovery.o 00:03:24.043 CC lib/nvmf/ctrlr_bdev.o 00:03:24.043 CC lib/nvmf/subsystem.o 00:03:24.043 CC lib/nvmf/nvmf.o 00:03:24.043 CC lib/scsi/dev.o 00:03:24.043 CC lib/nvmf/nvmf_rpc.o 00:03:24.043 CC lib/scsi/lun.o 00:03:24.043 CC lib/ublk/ublk.o 00:03:24.043 CC lib/nvmf/transport.o 00:03:24.043 CC lib/scsi/port.o 00:03:24.043 CC lib/ublk/ublk_rpc.o 00:03:24.043 CC lib/ftl/ftl_core.o 00:03:24.043 CC lib/scsi/scsi.o 00:03:24.043 CC lib/nvmf/stubs.o 00:03:24.043 CC lib/ftl/ftl_init.o 00:03:24.043 CC lib/scsi/scsi_bdev.o 00:03:24.043 CC lib/nvmf/tcp.o 00:03:24.043 CC lib/nvmf/vfio_user.o 00:03:24.043 CC lib/nvmf/mdns_server.o 00:03:24.043 CC lib/scsi/scsi_pr.o 00:03:24.043 CC lib/ftl/ftl_layout.o 00:03:24.043 CC lib/ftl/ftl_debug.o 00:03:24.043 CC lib/nvmf/rdma.o 00:03:24.043 CC lib/nvmf/auth.o 
00:03:24.043 CC lib/ftl/ftl_io.o 00:03:24.043 CC lib/scsi/scsi_rpc.o 00:03:24.043 CC lib/scsi/task.o 00:03:24.043 CC lib/ftl/ftl_sb.o 00:03:24.043 CC lib/ftl/ftl_l2p.o 00:03:24.043 CC lib/ftl/ftl_l2p_flat.o 00:03:24.043 CC lib/ftl/ftl_band.o 00:03:24.043 CC lib/ftl/ftl_nv_cache.o 00:03:24.043 CC lib/ftl/ftl_writer.o 00:03:24.043 CC lib/ftl/ftl_band_ops.o 00:03:24.043 CC lib/ftl/ftl_rq.o 00:03:24.043 CC lib/ftl/ftl_reloc.o 00:03:24.043 CC lib/ftl/ftl_l2p_cache.o 00:03:24.043 CC lib/ftl/ftl_p2l.o 00:03:24.043 CC lib/ftl/ftl_p2l_log.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.043 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.043 CC lib/ftl/utils/ftl_conf.o 00:03:24.043 CC lib/ftl/utils/ftl_md.o 00:03:24.043 CC lib/ftl/utils/ftl_mempool.o 00:03:24.043 CC lib/ftl/utils/ftl_property.o 00:03:24.043 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.043 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.043 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.043 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.043 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.043 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.043 CC lib/ftl/base/ftl_base_dev.o 00:03:24.043 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.043 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.043 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:24.043 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:24.043 CC lib/ftl/ftl_trace.o 00:03:24.612 LIB libspdk_nbd.a 00:03:24.612 SO libspdk_nbd.so.7.0 00:03:24.612 SYMLINK libspdk_nbd.so 00:03:24.871 LIB libspdk_scsi.a 00:03:24.871 SO libspdk_scsi.so.9.0 00:03:24.871 LIB libspdk_ublk.a 00:03:24.871 SO libspdk_ublk.so.3.0 00:03:24.871 SYMLINK libspdk_scsi.so 00:03:24.871 SYMLINK libspdk_ublk.so 00:03:25.130 LIB libspdk_ftl.a 00:03:25.130 CC lib/iscsi/conn.o 00:03:25.130 CC lib/iscsi/init_grp.o 00:03:25.130 CC lib/iscsi/iscsi.o 00:03:25.130 CC lib/vhost/vhost.o 00:03:25.130 CC lib/iscsi/param.o 00:03:25.130 CC lib/vhost/vhost_rpc.o 00:03:25.130 CC lib/iscsi/portal_grp.o 00:03:25.130 CC lib/iscsi/tgt_node.o 00:03:25.130 CC lib/vhost/vhost_scsi.o 00:03:25.130 CC lib/iscsi/iscsi_subsystem.o 00:03:25.130 CC lib/vhost/vhost_blk.o 00:03:25.130 CC lib/iscsi/iscsi_rpc.o 00:03:25.130 CC lib/vhost/rte_vhost_user.o 00:03:25.130 CC lib/iscsi/task.o 00:03:25.130 SO libspdk_ftl.so.9.0 00:03:25.389 SYMLINK libspdk_ftl.so 00:03:25.957 LIB libspdk_nvmf.a 00:03:25.957 SO libspdk_nvmf.so.19.0 00:03:25.957 LIB libspdk_vhost.a 00:03:25.957 SO libspdk_vhost.so.8.0 00:03:26.216 SYMLINK libspdk_nvmf.so 00:03:26.216 SYMLINK libspdk_vhost.so 00:03:26.216 LIB libspdk_iscsi.a 00:03:26.216 SO libspdk_iscsi.so.8.0 00:03:26.216 SYMLINK libspdk_iscsi.so 00:03:26.784 CC module/env_dpdk/env_dpdk_rpc.o 00:03:26.784 CC module/vfu_device/vfu_virtio.o 00:03:26.784 CC module/vfu_device/vfu_virtio_blk.o 00:03:26.784 CC module/vfu_device/vfu_virtio_scsi.o 00:03:26.784 CC module/vfu_device/vfu_virtio_rpc.o 00:03:26.784 CC module/vfu_device/vfu_virtio_fs.o 00:03:27.042 LIB libspdk_env_dpdk_rpc.a 00:03:27.042 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.042 CC module/keyring/file/keyring.o 00:03:27.042 CC module/keyring/file/keyring_rpc.o 00:03:27.042 CC module/keyring/linux/keyring.o 00:03:27.042 CC module/sock/posix/posix.o 00:03:27.042 CC 
module/keyring/linux/keyring_rpc.o 00:03:27.042 CC module/fsdev/aio/fsdev_aio.o 00:03:27.042 CC module/fsdev/aio/linux_aio_mgr.o 00:03:27.042 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:27.042 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.042 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.042 CC module/blob/bdev/blob_bdev.o 00:03:27.042 CC module/accel/ioat/accel_ioat.o 00:03:27.042 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.042 CC module/accel/error/accel_error.o 00:03:27.042 CC module/accel/error/accel_error_rpc.o 00:03:27.042 CC module/accel/dsa/accel_dsa.o 00:03:27.042 CC module/accel/dsa/accel_dsa_rpc.o 00:03:27.042 CC module/accel/iaa/accel_iaa.o 00:03:27.042 CC module/accel/iaa/accel_iaa_rpc.o 00:03:27.042 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.042 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.042 LIB libspdk_scheduler_gscheduler.a 00:03:27.042 LIB libspdk_keyring_file.a 00:03:27.042 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.042 LIB libspdk_keyring_linux.a 00:03:27.301 LIB libspdk_accel_ioat.a 00:03:27.301 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.302 SO libspdk_keyring_linux.so.1.0 00:03:27.302 LIB libspdk_accel_error.a 00:03:27.302 SO libspdk_keyring_file.so.2.0 00:03:27.302 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.302 SO libspdk_accel_ioat.so.6.0 00:03:27.302 LIB libspdk_scheduler_dynamic.a 00:03:27.302 SO libspdk_accel_error.so.2.0 00:03:27.302 LIB libspdk_accel_iaa.a 00:03:27.302 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.302 SYMLINK libspdk_keyring_file.so 00:03:27.302 SYMLINK libspdk_keyring_linux.so 00:03:27.302 SO libspdk_scheduler_dynamic.so.4.0 00:03:27.302 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.302 SO libspdk_accel_iaa.so.3.0 00:03:27.302 LIB libspdk_accel_dsa.a 00:03:27.302 LIB libspdk_blob_bdev.a 00:03:27.302 SYMLINK libspdk_accel_ioat.so 00:03:27.302 SYMLINK libspdk_accel_error.so 00:03:27.302 SO libspdk_blob_bdev.so.11.0 00:03:27.302 SO libspdk_accel_dsa.so.5.0 00:03:27.302 SYMLINK 
libspdk_scheduler_dynamic.so 00:03:27.302 SYMLINK libspdk_accel_iaa.so 00:03:27.302 SYMLINK libspdk_blob_bdev.so 00:03:27.302 SYMLINK libspdk_accel_dsa.so 00:03:27.302 LIB libspdk_vfu_device.a 00:03:27.560 SO libspdk_vfu_device.so.3.0 00:03:27.560 SYMLINK libspdk_vfu_device.so 00:03:27.560 LIB libspdk_fsdev_aio.a 00:03:27.560 SO libspdk_fsdev_aio.so.1.0 00:03:27.560 LIB libspdk_sock_posix.a 00:03:27.560 SO libspdk_sock_posix.so.6.0 00:03:27.560 SYMLINK libspdk_fsdev_aio.so 00:03:27.819 SYMLINK libspdk_sock_posix.so 00:03:27.819 CC module/bdev/error/vbdev_error_rpc.o 00:03:27.819 CC module/bdev/error/vbdev_error.o 00:03:27.819 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:27.819 CC module/bdev/raid/bdev_raid.o 00:03:27.819 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.819 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.819 CC module/bdev/gpt/gpt.o 00:03:27.819 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.819 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.819 CC module/bdev/raid/raid1.o 00:03:27.819 CC module/bdev/raid/raid0.o 00:03:27.819 CC module/bdev/raid/concat.o 00:03:27.819 CC module/bdev/ftl/bdev_ftl.o 00:03:27.819 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.819 CC module/bdev/split/vbdev_split.o 00:03:27.819 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.819 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.819 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.819 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:27.819 CC module/bdev/delay/vbdev_delay.o 00:03:27.819 CC module/bdev/null/bdev_null.o 00:03:27.819 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.819 CC module/bdev/null/bdev_null_rpc.o 00:03:27.819 CC module/bdev/aio/bdev_aio.o 00:03:27.819 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.819 CC module/bdev/malloc/bdev_malloc.o 00:03:27.819 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:27.819 CC module/bdev/nvme/bdev_nvme.o 00:03:27.819 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:27.819 CC module/bdev/aio/bdev_aio_rpc.o 00:03:27.819 CC module/bdev/nvme/nvme_rpc.o 
00:03:27.819 CC module/bdev/passthru/vbdev_passthru.o 00:03:27.819 CC module/bdev/nvme/bdev_mdns_client.o 00:03:27.819 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:27.819 CC module/bdev/nvme/vbdev_opal.o 00:03:27.819 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.819 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.819 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.819 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.819 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.819 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.819 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.078 LIB libspdk_blobfs_bdev.a 00:03:28.078 LIB libspdk_bdev_split.a 00:03:28.078 LIB libspdk_bdev_error.a 00:03:28.078 SO libspdk_bdev_split.so.6.0 00:03:28.078 SO libspdk_blobfs_bdev.so.6.0 00:03:28.078 LIB libspdk_bdev_ftl.a 00:03:28.078 SO libspdk_bdev_error.so.6.0 00:03:28.078 LIB libspdk_bdev_passthru.a 00:03:28.078 LIB libspdk_bdev_gpt.a 00:03:28.078 LIB libspdk_bdev_null.a 00:03:28.078 SO libspdk_bdev_ftl.so.6.0 00:03:28.078 SYMLINK libspdk_blobfs_bdev.so 00:03:28.078 SYMLINK libspdk_bdev_split.so 00:03:28.078 LIB libspdk_bdev_zone_block.a 00:03:28.078 SO libspdk_bdev_passthru.so.6.0 00:03:28.078 SO libspdk_bdev_null.so.6.0 00:03:28.078 SO libspdk_bdev_gpt.so.6.0 00:03:28.078 LIB libspdk_bdev_aio.a 00:03:28.078 LIB libspdk_bdev_malloc.a 00:03:28.078 SO libspdk_bdev_zone_block.so.6.0 00:03:28.078 SYMLINK libspdk_bdev_error.so 00:03:28.336 LIB libspdk_bdev_iscsi.a 00:03:28.336 SO libspdk_bdev_aio.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_ftl.so 00:03:28.336 SO libspdk_bdev_malloc.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_gpt.so 00:03:28.336 SO libspdk_bdev_iscsi.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_passthru.so 00:03:28.336 SYMLINK libspdk_bdev_null.so 00:03:28.336 LIB libspdk_bdev_delay.a 00:03:28.336 SYMLINK libspdk_bdev_zone_block.so 00:03:28.336 SO libspdk_bdev_delay.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_aio.so 00:03:28.336 LIB libspdk_bdev_lvol.a 00:03:28.336 SYMLINK 
libspdk_bdev_malloc.so 00:03:28.336 SYMLINK libspdk_bdev_iscsi.so 00:03:28.336 LIB libspdk_bdev_virtio.a 00:03:28.336 SO libspdk_bdev_lvol.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_delay.so 00:03:28.336 SO libspdk_bdev_virtio.so.6.0 00:03:28.336 SYMLINK libspdk_bdev_lvol.so 00:03:28.336 SYMLINK libspdk_bdev_virtio.so 00:03:28.594 LIB libspdk_bdev_raid.a 00:03:28.594 SO libspdk_bdev_raid.so.6.0 00:03:28.853 SYMLINK libspdk_bdev_raid.so 00:03:29.420 LIB libspdk_bdev_nvme.a 00:03:29.679 SO libspdk_bdev_nvme.so.7.0 00:03:29.679 SYMLINK libspdk_bdev_nvme.so 00:03:30.246 CC module/event/subsystems/iobuf/iobuf.o 00:03:30.246 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:30.246 CC module/event/subsystems/sock/sock.o 00:03:30.246 CC module/event/subsystems/keyring/keyring.o 00:03:30.246 CC module/event/subsystems/vmd/vmd.o 00:03:30.246 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:30.246 CC module/event/subsystems/scheduler/scheduler.o 00:03:30.246 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:30.246 CC module/event/subsystems/fsdev/fsdev.o 00:03:30.246 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:30.505 LIB libspdk_event_vhost_blk.a 00:03:30.505 LIB libspdk_event_keyring.a 00:03:30.505 LIB libspdk_event_vfu_tgt.a 00:03:30.505 LIB libspdk_event_sock.a 00:03:30.505 LIB libspdk_event_iobuf.a 00:03:30.505 LIB libspdk_event_vmd.a 00:03:30.505 LIB libspdk_event_fsdev.a 00:03:30.505 LIB libspdk_event_scheduler.a 00:03:30.505 SO libspdk_event_sock.so.5.0 00:03:30.505 SO libspdk_event_vhost_blk.so.3.0 00:03:30.505 SO libspdk_event_keyring.so.1.0 00:03:30.505 SO libspdk_event_vfu_tgt.so.3.0 00:03:30.505 SO libspdk_event_iobuf.so.3.0 00:03:30.505 SO libspdk_event_fsdev.so.1.0 00:03:30.505 SO libspdk_event_vmd.so.6.0 00:03:30.505 SO libspdk_event_scheduler.so.4.0 00:03:30.505 SYMLINK libspdk_event_sock.so 00:03:30.505 SYMLINK libspdk_event_keyring.so 00:03:30.505 SYMLINK libspdk_event_vhost_blk.so 00:03:30.505 SYMLINK libspdk_event_vfu_tgt.so 00:03:30.505 SYMLINK 
libspdk_event_fsdev.so 00:03:30.505 SYMLINK libspdk_event_iobuf.so 00:03:30.505 SYMLINK libspdk_event_scheduler.so 00:03:30.505 SYMLINK libspdk_event_vmd.so 00:03:30.764 CC module/event/subsystems/accel/accel.o 00:03:31.023 LIB libspdk_event_accel.a 00:03:31.023 SO libspdk_event_accel.so.6.0 00:03:31.023 SYMLINK libspdk_event_accel.so 00:03:31.282 CC module/event/subsystems/bdev/bdev.o 00:03:31.542 LIB libspdk_event_bdev.a 00:03:31.542 SO libspdk_event_bdev.so.6.0 00:03:31.542 SYMLINK libspdk_event_bdev.so 00:03:32.110 CC module/event/subsystems/scsi/scsi.o 00:03:32.110 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.110 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.110 CC module/event/subsystems/ublk/ublk.o 00:03:32.110 CC module/event/subsystems/nbd/nbd.o 00:03:32.110 LIB libspdk_event_ublk.a 00:03:32.110 LIB libspdk_event_scsi.a 00:03:32.110 LIB libspdk_event_nbd.a 00:03:32.110 SO libspdk_event_scsi.so.6.0 00:03:32.110 SO libspdk_event_ublk.so.3.0 00:03:32.110 SO libspdk_event_nbd.so.6.0 00:03:32.110 LIB libspdk_event_nvmf.a 00:03:32.110 SYMLINK libspdk_event_scsi.so 00:03:32.110 SYMLINK libspdk_event_ublk.so 00:03:32.110 SYMLINK libspdk_event_nbd.so 00:03:32.110 SO libspdk_event_nvmf.so.6.0 00:03:32.368 SYMLINK libspdk_event_nvmf.so 00:03:32.368 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.368 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:32.628 LIB libspdk_event_vhost_scsi.a 00:03:32.628 LIB libspdk_event_iscsi.a 00:03:32.628 SO libspdk_event_vhost_scsi.so.3.0 00:03:32.628 SO libspdk_event_iscsi.so.6.0 00:03:32.628 SYMLINK libspdk_event_vhost_scsi.so 00:03:32.628 SYMLINK libspdk_event_iscsi.so 00:03:32.887 SO libspdk.so.6.0 00:03:32.887 SYMLINK libspdk.so 00:03:33.148 CXX app/trace/trace.o 00:03:33.148 CC app/trace_record/trace_record.o 00:03:33.148 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.148 CC app/spdk_lspci/spdk_lspci.o 00:03:33.148 CC test/rpc_client/rpc_client_test.o 00:03:33.148 TEST_HEADER include/spdk/accel_module.h 
00:03:33.148 TEST_HEADER include/spdk/accel.h 00:03:33.148 CC app/spdk_nvme_perf/perf.o 00:03:33.148 TEST_HEADER include/spdk/assert.h 00:03:33.148 CC app/spdk_top/spdk_top.o 00:03:33.148 TEST_HEADER include/spdk/barrier.h 00:03:33.148 TEST_HEADER include/spdk/bdev.h 00:03:33.148 TEST_HEADER include/spdk/base64.h 00:03:33.148 TEST_HEADER include/spdk/bdev_module.h 00:03:33.148 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.148 CC app/spdk_nvme_identify/identify.o 00:03:33.148 TEST_HEADER include/spdk/bit_pool.h 00:03:33.148 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.148 TEST_HEADER include/spdk/bit_array.h 00:03:33.148 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.148 TEST_HEADER include/spdk/blobfs.h 00:03:33.148 TEST_HEADER include/spdk/blob.h 00:03:33.148 TEST_HEADER include/spdk/conf.h 00:03:33.148 TEST_HEADER include/spdk/config.h 00:03:33.148 TEST_HEADER include/spdk/cpuset.h 00:03:33.148 TEST_HEADER include/spdk/crc32.h 00:03:33.148 TEST_HEADER include/spdk/crc16.h 00:03:33.148 TEST_HEADER include/spdk/crc64.h 00:03:33.148 TEST_HEADER include/spdk/dif.h 00:03:33.148 TEST_HEADER include/spdk/dma.h 00:03:33.148 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.148 TEST_HEADER include/spdk/env.h 00:03:33.415 TEST_HEADER include/spdk/event.h 00:03:33.416 TEST_HEADER include/spdk/endian.h 00:03:33.416 TEST_HEADER include/spdk/fd_group.h 00:03:33.416 TEST_HEADER include/spdk/fd.h 00:03:33.416 TEST_HEADER include/spdk/file.h 00:03:33.416 TEST_HEADER include/spdk/fsdev.h 00:03:33.416 TEST_HEADER include/spdk/fsdev_module.h 00:03:33.416 TEST_HEADER include/spdk/ftl.h 00:03:33.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:33.416 TEST_HEADER include/spdk/hexlify.h 00:03:33.416 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.416 TEST_HEADER include/spdk/histogram_data.h 00:03:33.416 TEST_HEADER include/spdk/idxd.h 00:03:33.416 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.416 TEST_HEADER include/spdk/init.h 00:03:33.416 CC examples/interrupt_tgt/interrupt_tgt.o 
00:03:33.416 TEST_HEADER include/spdk/ioat.h 00:03:33.416 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.416 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.416 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.416 TEST_HEADER include/spdk/keyring.h 00:03:33.416 TEST_HEADER include/spdk/json.h 00:03:33.416 TEST_HEADER include/spdk/keyring_module.h 00:03:33.416 CC app/spdk_dd/spdk_dd.o 00:03:33.416 TEST_HEADER include/spdk/log.h 00:03:33.416 TEST_HEADER include/spdk/likely.h 00:03:33.416 TEST_HEADER include/spdk/lvol.h 00:03:33.416 TEST_HEADER include/spdk/md5.h 00:03:33.416 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.416 TEST_HEADER include/spdk/nbd.h 00:03:33.416 TEST_HEADER include/spdk/memory.h 00:03:33.416 TEST_HEADER include/spdk/mmio.h 00:03:33.416 TEST_HEADER include/spdk/net.h 00:03:33.416 TEST_HEADER include/spdk/nvme.h 00:03:33.416 TEST_HEADER include/spdk/notify.h 00:03:33.416 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.416 CC app/nvmf_tgt/nvmf_main.o 00:03:33.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.416 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.416 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.416 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.416 TEST_HEADER include/spdk/nvmf.h 00:03:33.416 TEST_HEADER include/spdk/opal.h 00:03:33.416 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.416 TEST_HEADER include/spdk/opal_spec.h 00:03:33.416 TEST_HEADER include/spdk/pci_ids.h 00:03:33.416 TEST_HEADER include/spdk/pipe.h 00:03:33.416 TEST_HEADER include/spdk/queue.h 00:03:33.416 TEST_HEADER include/spdk/reduce.h 00:03:33.416 TEST_HEADER include/spdk/rpc.h 00:03:33.416 TEST_HEADER include/spdk/scsi.h 00:03:33.416 TEST_HEADER include/spdk/scheduler.h 00:03:33.416 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.416 TEST_HEADER include/spdk/sock.h 00:03:33.416 TEST_HEADER include/spdk/stdinc.h 00:03:33.416 
TEST_HEADER include/spdk/trace.h 00:03:33.416 TEST_HEADER include/spdk/string.h 00:03:33.416 CC app/spdk_tgt/spdk_tgt.o 00:03:33.416 TEST_HEADER include/spdk/tree.h 00:03:33.416 TEST_HEADER include/spdk/thread.h 00:03:33.416 TEST_HEADER include/spdk/trace_parser.h 00:03:33.416 TEST_HEADER include/spdk/ublk.h 00:03:33.416 TEST_HEADER include/spdk/util.h 00:03:33.416 TEST_HEADER include/spdk/version.h 00:03:33.416 TEST_HEADER include/spdk/uuid.h 00:03:33.416 TEST_HEADER include/spdk/vhost.h 00:03:33.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.416 TEST_HEADER include/spdk/xor.h 00:03:33.416 TEST_HEADER include/spdk/vmd.h 00:03:33.416 TEST_HEADER include/spdk/zipf.h 00:03:33.416 CXX test/cpp_headers/accel.o 00:03:33.416 CXX test/cpp_headers/accel_module.o 00:03:33.416 CXX test/cpp_headers/assert.o 00:03:33.416 CXX test/cpp_headers/barrier.o 00:03:33.416 CXX test/cpp_headers/base64.o 00:03:33.416 CXX test/cpp_headers/bdev.o 00:03:33.416 CXX test/cpp_headers/bdev_module.o 00:03:33.416 CXX test/cpp_headers/bdev_zone.o 00:03:33.416 CXX test/cpp_headers/bit_pool.o 00:03:33.416 CXX test/cpp_headers/bit_array.o 00:03:33.416 CXX test/cpp_headers/blob_bdev.o 00:03:33.416 CXX test/cpp_headers/blob.o 00:03:33.416 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.416 CXX test/cpp_headers/conf.o 00:03:33.416 CXX test/cpp_headers/blobfs.o 00:03:33.416 CXX test/cpp_headers/cpuset.o 00:03:33.416 CXX test/cpp_headers/config.o 00:03:33.416 CXX test/cpp_headers/crc16.o 00:03:33.416 CXX test/cpp_headers/crc32.o 00:03:33.416 CXX test/cpp_headers/crc64.o 00:03:33.416 CXX test/cpp_headers/dma.o 00:03:33.416 CXX test/cpp_headers/dif.o 00:03:33.416 CXX test/cpp_headers/env_dpdk.o 00:03:33.416 CXX test/cpp_headers/endian.o 00:03:33.416 CXX test/cpp_headers/env.o 00:03:33.416 CXX test/cpp_headers/event.o 00:03:33.416 CXX test/cpp_headers/file.o 00:03:33.416 CXX test/cpp_headers/fd_group.o 00:03:33.416 CXX test/cpp_headers/fd.o 
00:03:33.416 CXX test/cpp_headers/fsdev.o 00:03:33.416 CXX test/cpp_headers/fsdev_module.o 00:03:33.416 CXX test/cpp_headers/fuse_dispatcher.o 00:03:33.416 CXX test/cpp_headers/ftl.o 00:03:33.416 CXX test/cpp_headers/gpt_spec.o 00:03:33.416 CXX test/cpp_headers/hexlify.o 00:03:33.416 CXX test/cpp_headers/histogram_data.o 00:03:33.416 CXX test/cpp_headers/idxd_spec.o 00:03:33.416 CXX test/cpp_headers/idxd.o 00:03:33.416 CXX test/cpp_headers/init.o 00:03:33.416 CXX test/cpp_headers/ioat.o 00:03:33.416 CXX test/cpp_headers/ioat_spec.o 00:03:33.416 CXX test/cpp_headers/iscsi_spec.o 00:03:33.416 CXX test/cpp_headers/json.o 00:03:33.416 CXX test/cpp_headers/jsonrpc.o 00:03:33.416 CXX test/cpp_headers/keyring.o 00:03:33.416 CXX test/cpp_headers/keyring_module.o 00:03:33.416 CXX test/cpp_headers/likely.o 00:03:33.416 CXX test/cpp_headers/log.o 00:03:33.416 CXX test/cpp_headers/lvol.o 00:03:33.416 CXX test/cpp_headers/md5.o 00:03:33.416 CXX test/cpp_headers/memory.o 00:03:33.416 CXX test/cpp_headers/nbd.o 00:03:33.416 CXX test/cpp_headers/mmio.o 00:03:33.416 CXX test/cpp_headers/notify.o 00:03:33.416 CXX test/cpp_headers/net.o 00:03:33.416 CXX test/cpp_headers/nvme_intel.o 00:03:33.416 CXX test/cpp_headers/nvme.o 00:03:33.416 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.416 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.416 CXX test/cpp_headers/nvme_spec.o 00:03:33.416 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.416 CXX test/cpp_headers/nvme_zns.o 00:03:33.416 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.416 CXX test/cpp_headers/nvmf.o 00:03:33.416 CXX test/cpp_headers/nvmf_spec.o 00:03:33.416 CXX test/cpp_headers/nvmf_transport.o 00:03:33.416 CXX test/cpp_headers/opal.o 00:03:33.416 CC test/thread/poller_perf/poller_perf.o 00:03:33.416 CC examples/util/zipf/zipf.o 00:03:33.416 CC test/app/histogram_perf/histogram_perf.o 00:03:33.416 CC examples/ioat/verify/verify.o 00:03:33.416 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.416 CC examples/ioat/perf/perf.o 
00:03:33.416 CC app/fio/nvme/fio_plugin.o 00:03:33.416 CC test/app/jsoncat/jsoncat.o 00:03:33.416 CC test/app/stub/stub.o 00:03:33.416 CXX test/cpp_headers/opal_spec.o 00:03:33.416 CC test/env/vtophys/vtophys.o 00:03:33.416 CC test/env/memory/memory_ut.o 00:03:33.416 CC test/env/pci/pci_ut.o 00:03:33.416 CC test/dma/test_dma/test_dma.o 00:03:33.416 CC app/fio/bdev/fio_plugin.o 00:03:33.689 LINK spdk_lspci 00:03:33.689 CC test/app/bdev_svc/bdev_svc.o 00:03:33.689 LINK spdk_nvme_discover 00:03:33.689 LINK nvmf_tgt 00:03:33.689 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.689 LINK rpc_client_test 00:03:33.955 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.955 LINK spdk_trace_record 00:03:33.955 LINK interrupt_tgt 00:03:33.955 LINK zipf 00:03:33.955 LINK jsoncat 00:03:33.955 LINK env_dpdk_post_init 00:03:33.955 LINK vtophys 00:03:33.955 CXX test/cpp_headers/pci_ids.o 00:03:33.955 CXX test/cpp_headers/pipe.o 00:03:33.955 CXX test/cpp_headers/queue.o 00:03:33.955 CXX test/cpp_headers/reduce.o 00:03:33.955 LINK stub 00:03:33.955 CXX test/cpp_headers/rpc.o 00:03:33.955 CXX test/cpp_headers/scheduler.o 00:03:33.955 CXX test/cpp_headers/scsi.o 00:03:33.955 CXX test/cpp_headers/scsi_spec.o 00:03:33.955 CXX test/cpp_headers/sock.o 00:03:33.955 CXX test/cpp_headers/stdinc.o 00:03:33.955 CXX test/cpp_headers/string.o 00:03:33.955 CXX test/cpp_headers/thread.o 00:03:33.955 CXX test/cpp_headers/trace.o 00:03:33.955 CXX test/cpp_headers/trace_parser.o 00:03:33.955 LINK poller_perf 00:03:33.955 CXX test/cpp_headers/tree.o 00:03:33.955 LINK iscsi_tgt 00:03:33.955 CXX test/cpp_headers/ublk.o 00:03:33.955 LINK histogram_perf 00:03:33.955 CXX test/cpp_headers/util.o 00:03:33.955 CXX test/cpp_headers/uuid.o 00:03:33.955 CXX test/cpp_headers/vfio_user_pci.o 00:03:33.955 CXX test/cpp_headers/vfio_user_spec.o 00:03:33.955 CXX test/cpp_headers/version.o 00:03:33.955 CXX test/cpp_headers/vhost.o 00:03:33.955 CXX test/cpp_headers/vmd.o 00:03:33.955 CXX test/cpp_headers/xor.o 
00:03:33.955 CXX test/cpp_headers/zipf.o 00:03:34.214 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.214 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.214 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.214 LINK spdk_tgt 00:03:34.214 LINK ioat_perf 00:03:34.214 LINK bdev_svc 00:03:34.214 LINK verify 00:03:34.214 LINK spdk_dd 00:03:34.214 LINK spdk_trace 00:03:34.214 LINK pci_ut 00:03:34.473 LINK spdk_bdev 00:03:34.473 CC examples/idxd/perf/perf.o 00:03:34.473 CC examples/vmd/led/led.o 00:03:34.473 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.473 CC examples/sock/hello_world/hello_sock.o 00:03:34.473 LINK nvme_fuzz 00:03:34.473 LINK spdk_nvme 00:03:34.473 LINK test_dma 00:03:34.473 CC examples/thread/thread/thread_ex.o 00:03:34.473 CC test/event/reactor/reactor.o 00:03:34.473 CC test/event/reactor_perf/reactor_perf.o 00:03:34.473 CC test/event/event_perf/event_perf.o 00:03:34.473 LINK spdk_nvme_perf 00:03:34.473 CC test/event/app_repeat/app_repeat.o 00:03:34.473 LINK spdk_top 00:03:34.473 LINK mem_callbacks 00:03:34.473 LINK spdk_nvme_identify 00:03:34.473 LINK vhost_fuzz 00:03:34.473 CC test/event/scheduler/scheduler.o 00:03:34.473 LINK lsvmd 00:03:34.473 LINK led 00:03:34.733 CC app/vhost/vhost.o 00:03:34.733 LINK hello_sock 00:03:34.733 LINK reactor 00:03:34.733 LINK reactor_perf 00:03:34.733 LINK event_perf 00:03:34.733 LINK app_repeat 00:03:34.733 LINK idxd_perf 00:03:34.733 LINK thread 00:03:34.733 LINK scheduler 00:03:34.733 LINK vhost 00:03:34.992 CC test/nvme/aer/aer.o 00:03:34.992 CC test/nvme/sgl/sgl.o 00:03:34.992 CC test/nvme/overhead/overhead.o 00:03:34.992 CC test/nvme/compliance/nvme_compliance.o 00:03:34.992 CC test/nvme/connect_stress/connect_stress.o 00:03:34.992 CC test/nvme/fdp/fdp.o 00:03:34.992 CC test/nvme/cuse/cuse.o 00:03:34.992 CC test/nvme/fused_ordering/fused_ordering.o 00:03:34.992 CC test/nvme/err_injection/err_injection.o 00:03:34.992 CC test/nvme/startup/startup.o 00:03:34.992 CC test/nvme/e2edp/nvme_dp.o 00:03:34.992 CC 
test/nvme/reserve/reserve.o 00:03:34.992 CC test/nvme/boot_partition/boot_partition.o 00:03:34.992 CC test/nvme/simple_copy/simple_copy.o 00:03:34.992 CC test/nvme/reset/reset.o 00:03:34.992 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.992 CC test/accel/dif/dif.o 00:03:34.992 CC test/blobfs/mkfs/mkfs.o 00:03:34.992 LINK memory_ut 00:03:34.992 CC test/lvol/esnap/esnap.o 00:03:34.992 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.992 CC examples/nvme/hello_world/hello_world.o 00:03:34.992 CC examples/nvme/arbitration/arbitration.o 00:03:34.992 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.992 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.252 CC examples/nvme/hotplug/hotplug.o 00:03:35.252 CC examples/nvme/abort/abort.o 00:03:35.252 CC examples/nvme/reconnect/reconnect.o 00:03:35.252 LINK startup 00:03:35.252 LINK fused_ordering 00:03:35.252 LINK boot_partition 00:03:35.252 LINK err_injection 00:03:35.252 LINK reserve 00:03:35.252 LINK connect_stress 00:03:35.252 LINK doorbell_aers 00:03:35.252 LINK simple_copy 00:03:35.252 LINK sgl 00:03:35.252 LINK mkfs 00:03:35.252 LINK nvme_dp 00:03:35.252 LINK reset 00:03:35.252 LINK aer 00:03:35.252 LINK overhead 00:03:35.252 LINK nvme_compliance 00:03:35.252 CC examples/accel/perf/accel_perf.o 00:03:35.252 CC examples/blob/hello_world/hello_blob.o 00:03:35.252 LINK fdp 00:03:35.252 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.252 CC examples/blob/cli/blobcli.o 00:03:35.252 LINK pmr_persistence 00:03:35.252 LINK cmb_copy 00:03:35.252 LINK hello_world 00:03:35.509 LINK hotplug 00:03:35.509 LINK arbitration 00:03:35.509 LINK reconnect 00:03:35.509 LINK abort 00:03:35.509 LINK nvme_manage 00:03:35.509 LINK hello_fsdev 00:03:35.509 LINK iscsi_fuzz 00:03:35.509 LINK hello_blob 00:03:35.509 LINK dif 00:03:35.766 LINK accel_perf 00:03:35.766 LINK blobcli 00:03:36.025 LINK cuse 00:03:36.025 CC test/bdev/bdevio/bdevio.o 00:03:36.284 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.284 CC 
examples/bdev/bdevperf/bdevperf.o 00:03:36.284 LINK hello_bdev 00:03:36.542 LINK bdevio 00:03:36.799 LINK bdevperf 00:03:37.368 CC examples/nvmf/nvmf/nvmf.o 00:03:37.627 LINK nvmf 00:03:38.565 LINK esnap 00:03:38.827 00:03:38.827 real 0m55.328s 00:03:38.827 user 8m15.666s 00:03:38.827 sys 3m37.988s 00:03:38.827 15:37:48 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:38.827 15:37:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:38.827 ************************************ 00:03:38.827 END TEST make 00:03:38.827 ************************************ 00:03:38.827 15:37:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:38.827 15:37:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:38.827 15:37:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:38.827 15:37:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.827 15:37:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:38.827 15:37:48 -- pm/common@44 -- $ pid=2157555 00:03:38.827 15:37:48 -- pm/common@50 -- $ kill -TERM 2157555 00:03:38.827 15:37:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.827 15:37:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:38.828 15:37:48 -- pm/common@44 -- $ pid=2157557 00:03:38.828 15:37:48 -- pm/common@50 -- $ kill -TERM 2157557 00:03:38.828 15:37:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.828 15:37:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:38.828 15:37:48 -- pm/common@44 -- $ pid=2157558 00:03:38.828 15:37:48 -- pm/common@50 -- $ kill -TERM 2157558 00:03:38.828 15:37:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.828 15:37:48 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:38.828 15:37:48 -- pm/common@44 -- $ pid=2157584 00:03:38.828 15:37:48 -- pm/common@50 -- $ sudo -E kill -TERM 2157584 00:03:38.828 15:37:49 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:38.828 15:37:49 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:38.828 15:37:49 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:39.087 15:37:49 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:39.087 15:37:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.087 15:37:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.087 15:37:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.087 15:37:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.087 15:37:49 -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.087 15:37:49 -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.087 15:37:49 -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.087 15:37:49 -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.087 15:37:49 -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.087 15:37:49 -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.087 15:37:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.087 15:37:49 -- scripts/common.sh@344 -- # case "$op" in 00:03:39.087 15:37:49 -- scripts/common.sh@345 -- # : 1 00:03:39.087 15:37:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.087 15:37:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.087 15:37:49 -- scripts/common.sh@365 -- # decimal 1 00:03:39.087 15:37:49 -- scripts/common.sh@353 -- # local d=1 00:03:39.087 15:37:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.087 15:37:49 -- scripts/common.sh@355 -- # echo 1 00:03:39.087 15:37:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.087 15:37:49 -- scripts/common.sh@366 -- # decimal 2 00:03:39.087 15:37:49 -- scripts/common.sh@353 -- # local d=2 00:03:39.087 15:37:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.087 15:37:49 -- scripts/common.sh@355 -- # echo 2 00:03:39.087 15:37:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.087 15:37:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.087 15:37:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.087 15:37:49 -- scripts/common.sh@368 -- # return 0 00:03:39.087 15:37:49 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.087 15:37:49 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.087 --rc genhtml_branch_coverage=1 00:03:39.087 --rc genhtml_function_coverage=1 00:03:39.087 --rc genhtml_legend=1 00:03:39.087 --rc geninfo_all_blocks=1 00:03:39.087 --rc geninfo_unexecuted_blocks=1 00:03:39.087 00:03:39.087 ' 00:03:39.087 15:37:49 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.087 --rc genhtml_branch_coverage=1 00:03:39.087 --rc genhtml_function_coverage=1 00:03:39.087 --rc genhtml_legend=1 00:03:39.087 --rc geninfo_all_blocks=1 00:03:39.087 --rc geninfo_unexecuted_blocks=1 00:03:39.087 00:03:39.087 ' 00:03:39.087 15:37:49 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.087 --rc genhtml_branch_coverage=1 00:03:39.087 --rc 
genhtml_function_coverage=1 00:03:39.087 --rc genhtml_legend=1 00:03:39.087 --rc geninfo_all_blocks=1 00:03:39.087 --rc geninfo_unexecuted_blocks=1 00:03:39.087 00:03:39.087 ' 00:03:39.087 15:37:49 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.087 --rc genhtml_branch_coverage=1 00:03:39.087 --rc genhtml_function_coverage=1 00:03:39.087 --rc genhtml_legend=1 00:03:39.087 --rc geninfo_all_blocks=1 00:03:39.087 --rc geninfo_unexecuted_blocks=1 00:03:39.087 00:03:39.087 ' 00:03:39.087 15:37:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.087 15:37:49 -- nvmf/common.sh@7 -- # uname -s 00:03:39.087 15:37:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.087 15:37:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.087 15:37:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.087 15:37:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.087 15:37:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.087 15:37:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.087 15:37:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.087 15:37:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.087 15:37:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.087 15:37:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.087 15:37:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:39.087 15:37:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:39.087 15:37:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.087 15:37:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.087 15:37:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:39.087 15:37:49 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.087 15:37:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.087 15:37:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:39.087 15:37:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.087 15:37:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.087 15:37:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.087 15:37:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.087 15:37:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.087 15:37:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.087 15:37:49 -- paths/export.sh@5 -- # export PATH 00:03:39.087 15:37:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.087 15:37:49 -- nvmf/common.sh@51 -- # : 0 00:03:39.087 15:37:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:39.087 15:37:49 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:39.087 15:37:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.087 15:37:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.087 15:37:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.087 15:37:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:39.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:39.087 15:37:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:39.087 15:37:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:39.087 15:37:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:39.087 15:37:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.087 15:37:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.087 15:37:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.087 15:37:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.087 15:37:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:39.087 15:37:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.087 15:37:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:39.087 15:37:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:39.087 15:37:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:39.087 15:37:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:39.087 15:37:49 -- spdk/autotest.sh@48 -- # udevadm_pid=2219785 00:03:39.087 15:37:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:39.087 15:37:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:39.087 15:37:49 -- pm/common@17 -- # local monitor 00:03:39.087 15:37:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.087 15:37:49 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:39.087 15:37:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.087 15:37:49 -- pm/common@21 -- # date +%s 00:03:39.087 15:37:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.087 15:37:49 -- pm/common@21 -- # date +%s 00:03:39.087 15:37:49 -- pm/common@25 -- # sleep 1 00:03:39.087 15:37:49 -- pm/common@21 -- # date +%s 00:03:39.087 15:37:49 -- pm/common@21 -- # date +%s 00:03:39.087 15:37:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727789869 00:03:39.087 15:37:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727789869 00:03:39.087 15:37:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727789869 00:03:39.087 15:37:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727789869 00:03:39.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727789869_collect-cpu-load.pm.log 00:03:39.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727789869_collect-vmstat.pm.log 00:03:39.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727789869_collect-cpu-temp.pm.log 00:03:39.087 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727789869_collect-bmc-pm.bmc.pm.log 00:03:40.024 
15:37:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:40.024 15:37:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:40.024 15:37:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.024 15:37:50 -- common/autotest_common.sh@10 -- # set +x 00:03:40.024 15:37:50 -- spdk/autotest.sh@59 -- # create_test_list 00:03:40.024 15:37:50 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:40.024 15:37:50 -- common/autotest_common.sh@10 -- # set +x 00:03:40.024 15:37:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:40.024 15:37:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.024 15:37:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.024 15:37:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:40.024 15:37:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:40.024 15:37:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:40.024 15:37:50 -- common/autotest_common.sh@1455 -- # uname 00:03:40.283 15:37:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:40.283 15:37:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:40.283 15:37:50 -- common/autotest_common.sh@1475 -- # uname 00:03:40.283 15:37:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:40.283 15:37:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:40.283 15:37:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:40.283 lcov: LCOV version 1.15 00:03:40.283 15:37:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:52.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:52.490 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.698 15:38:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:04.698 15:38:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.698 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:04:04.698 15:38:14 -- spdk/autotest.sh@78 -- # rm -f 00:04:04.699 15:38:14 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.320 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:07.320 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:07.320 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:07.579 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:07.579 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:07.579 15:38:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:07.579 15:38:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:07.579 15:38:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:07.579 15:38:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:07.579 15:38:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.579 15:38:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:07.579 15:38:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:07.579 15:38:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.579 15:38:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.579 15:38:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:07.579 15:38:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.579 15:38:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.579 15:38:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:07.579 15:38:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:07.579 15:38:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.838 No valid GPT data, bailing 00:04:07.838 15:38:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.838 15:38:17 -- scripts/common.sh@394 -- # pt= 00:04:07.838 15:38:17 -- scripts/common.sh@395 -- # return 1 00:04:07.838 15:38:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.838 1+0 records in 00:04:07.838 1+0 records out 00:04:07.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00570206 s, 184 MB/s 00:04:07.838 15:38:17 -- spdk/autotest.sh@105 -- # sync 00:04:07.838 15:38:17 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.838 15:38:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.838 15:38:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.408 15:38:23 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.408 15:38:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.408 15:38:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.408 15:38:23 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:16.376 Hugepages 00:04:16.376 node hugesize free / total 00:04:16.376 node0 1048576kB 0 / 0 00:04:16.376 node0 2048kB 0 / 0 00:04:16.376 node1 1048576kB 0 / 0 00:04:16.376 node1 2048kB 0 / 0 00:04:16.376 00:04:16.376 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.376 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:16.376 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:16.376 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:16.376 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:16.376 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:16.376 15:38:26 -- spdk/autotest.sh@117 -- # uname -s 00:04:16.376 15:38:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:16.376 15:38:26 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:16.377 15:38:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.661 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:20.595 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.595 15:38:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:21.970 15:38:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:21.970 15:38:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:21.970 15:38:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:21.970 15:38:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:21.970 15:38:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:21.970 15:38:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:21.970 15:38:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.970 15:38:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:21.970 15:38:31 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:21.970 15:38:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:21.970 15:38:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:21.970 15:38:31 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.526 Waiting for block devices as requested 00:04:24.526 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.784 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.784 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.784 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:25.043 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:25.043 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:25.043 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:25.043 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:25.302 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:25.302 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:25.302 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:25.560 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:25.560 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:25.560 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:25.819 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:25.819 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:25.819 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:25.819 15:38:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.819 15:38:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:26.077 15:38:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:26.077 15:38:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:26.077 15:38:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:26.077 15:38:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:26.077 15:38:36 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:26.077 15:38:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:26.077 15:38:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:26.077 15:38:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:26.077 15:38:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:26.077 15:38:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:26.077 15:38:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:26.077 15:38:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:26.077 15:38:36 -- common/autotest_common.sh@1541 -- # continue 00:04:26.077 15:38:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:26.077 15:38:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.077 15:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:26.077 15:38:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:26.077 15:38:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.077 15:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:26.077 15:38:36 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.366 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:29.366 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.366 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.321 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.321 15:38:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:30.321 15:38:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.321 15:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:30.580 15:38:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:30.580 15:38:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:30.580 15:38:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.580 15:38:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:30.580 15:38:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:30.580 15:38:40 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:30.580 15:38:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:30.580 15:38:40 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:30.580 15:38:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:30.580 15:38:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:30.580 15:38:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:30.580 15:38:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:30.580 15:38:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:30.580 15:38:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:30.580 15:38:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:30.580 15:38:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:30.580 15:38:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:30.580 15:38:40 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:30.580 15:38:40 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.580 15:38:40 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:30.580 15:38:40 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:30.580 15:38:40 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:30.580 15:38:40 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:30.580 15:38:40 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2234016 00:04:30.580 15:38:40 -- common/autotest_common.sh@1583 -- # waitforlisten 2234016 00:04:30.580 15:38:40 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.580 15:38:40 -- common/autotest_common.sh@831 -- # '[' -z 2234016 ']' 00:04:30.580 15:38:40 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.580 15:38:40 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.580 15:38:40 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:30.580 15:38:40 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.580 15:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:30.580 [2024-10-01 15:38:40.692244] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:04:30.580 [2024-10-01 15:38:40.692295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234016 ] 00:04:30.581 [2024-10-01 15:38:40.761587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.839 [2024-10-01 15:38:40.842902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.405 15:38:41 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.405 15:38:41 -- common/autotest_common.sh@864 -- # return 0 00:04:31.405 15:38:41 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:31.405 15:38:41 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:31.405 15:38:41 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:34.690 nvme0n1 00:04:34.690 15:38:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:34.690 [2024-10-01 15:38:44.695688] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:34.690 request: 00:04:34.690 { 00:04:34.690 "nvme_ctrlr_name": "nvme0", 00:04:34.690 "password": "test", 00:04:34.690 "method": "bdev_nvme_opal_revert", 00:04:34.690 "req_id": 1 00:04:34.690 } 00:04:34.690 Got JSON-RPC error response 00:04:34.690 response: 00:04:34.690 { 00:04:34.690 "code": -32602, 00:04:34.690 "message": "Invalid parameters" 00:04:34.690 } 00:04:34.690 15:38:44 -- common/autotest_common.sh@1589 -- # true 
00:04:34.690 15:38:44 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:34.690 15:38:44 -- common/autotest_common.sh@1593 -- # killprocess 2234016 00:04:34.690 15:38:44 -- common/autotest_common.sh@950 -- # '[' -z 2234016 ']' 00:04:34.690 15:38:44 -- common/autotest_common.sh@954 -- # kill -0 2234016 00:04:34.690 15:38:44 -- common/autotest_common.sh@955 -- # uname 00:04:34.690 15:38:44 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.690 15:38:44 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2234016 00:04:34.690 15:38:44 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.690 15:38:44 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.690 15:38:44 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2234016' 00:04:34.690 killing process with pid 2234016 00:04:34.690 15:38:44 -- common/autotest_common.sh@969 -- # kill 2234016 00:04:34.690 15:38:44 -- common/autotest_common.sh@974 -- # wait 2234016 00:04:37.221 15:38:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:37.221 15:38:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:37.221 15:38:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.221 15:38:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.221 15:38:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:37.221 15:38:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.221 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.221 15:38:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:37.221 15:38:46 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:37.221 15:38:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.222 15:38:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.222 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 ************************************ 00:04:37.222 START TEST env 00:04:37.222 
************************************ 00:04:37.222 15:38:46 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:37.222 * Looking for test storage... 00:04:37.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.222 15:38:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.222 15:38:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.222 15:38:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.222 15:38:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.222 15:38:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.222 15:38:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.222 15:38:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.222 15:38:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.222 15:38:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.222 15:38:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.222 15:38:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.222 15:38:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:37.222 15:38:47 env -- scripts/common.sh@345 -- # : 1 00:04:37.222 15:38:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.222 15:38:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.222 15:38:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:37.222 15:38:47 env -- scripts/common.sh@353 -- # local d=1 00:04:37.222 15:38:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.222 15:38:47 env -- scripts/common.sh@355 -- # echo 1 00:04:37.222 15:38:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.222 15:38:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:37.222 15:38:47 env -- scripts/common.sh@353 -- # local d=2 00:04:37.222 15:38:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.222 15:38:47 env -- scripts/common.sh@355 -- # echo 2 00:04:37.222 15:38:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.222 15:38:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.222 15:38:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.222 15:38:47 env -- scripts/common.sh@368 -- # return 0 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.222 --rc genhtml_branch_coverage=1 00:04:37.222 --rc genhtml_function_coverage=1 00:04:37.222 --rc genhtml_legend=1 00:04:37.222 --rc geninfo_all_blocks=1 00:04:37.222 --rc geninfo_unexecuted_blocks=1 00:04:37.222 00:04:37.222 ' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.222 --rc genhtml_branch_coverage=1 00:04:37.222 --rc genhtml_function_coverage=1 00:04:37.222 --rc genhtml_legend=1 00:04:37.222 --rc geninfo_all_blocks=1 00:04:37.222 --rc geninfo_unexecuted_blocks=1 00:04:37.222 00:04:37.222 ' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:37.222 --rc genhtml_branch_coverage=1 00:04:37.222 --rc genhtml_function_coverage=1 00:04:37.222 --rc genhtml_legend=1 00:04:37.222 --rc geninfo_all_blocks=1 00:04:37.222 --rc geninfo_unexecuted_blocks=1 00:04:37.222 00:04:37.222 ' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.222 --rc genhtml_branch_coverage=1 00:04:37.222 --rc genhtml_function_coverage=1 00:04:37.222 --rc genhtml_legend=1 00:04:37.222 --rc geninfo_all_blocks=1 00:04:37.222 --rc geninfo_unexecuted_blocks=1 00:04:37.222 00:04:37.222 ' 00:04:37.222 15:38:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.222 15:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 ************************************ 00:04:37.222 START TEST env_memory 00:04:37.222 ************************************ 00:04:37.222 15:38:47 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:37.222 00:04:37.222 00:04:37.222 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.222 http://cunit.sourceforge.net/ 00:04:37.222 00:04:37.222 00:04:37.222 Suite: memory 00:04:37.222 Test: alloc and free memory map ...[2024-10-01 15:38:47.182679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:37.222 passed 00:04:37.222 Test: mem map translation ...[2024-10-01 15:38:47.200695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:37.222 [2024-10-01 
15:38:47.200713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:37.222 [2024-10-01 15:38:47.200746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:37.222 [2024-10-01 15:38:47.200752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:37.222 passed 00:04:37.222 Test: mem map registration ...[2024-10-01 15:38:47.236822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:37.222 [2024-10-01 15:38:47.236840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:37.222 passed 00:04:37.222 Test: mem map adjacent registrations ...passed 00:04:37.222 00:04:37.222 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.222 suites 1 1 n/a 0 0 00:04:37.222 tests 4 4 4 0 0 00:04:37.222 asserts 152 152 152 0 n/a 00:04:37.222 00:04:37.222 Elapsed time = 0.133 seconds 00:04:37.222 00:04:37.222 real 0m0.146s 00:04:37.222 user 0m0.138s 00:04:37.222 sys 0m0.007s 00:04:37.222 15:38:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.222 15:38:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 ************************************ 00:04:37.222 END TEST env_memory 00:04:37.222 ************************************ 00:04:37.222 15:38:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:04:37.222 15:38:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.222 15:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.222 ************************************ 00:04:37.222 START TEST env_vtophys 00:04:37.222 ************************************ 00:04:37.222 15:38:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:37.222 EAL: lib.eal log level changed from notice to debug 00:04:37.222 EAL: Detected lcore 0 as core 0 on socket 0 00:04:37.222 EAL: Detected lcore 1 as core 1 on socket 0 00:04:37.222 EAL: Detected lcore 2 as core 2 on socket 0 00:04:37.222 EAL: Detected lcore 3 as core 3 on socket 0 00:04:37.222 EAL: Detected lcore 4 as core 4 on socket 0 00:04:37.222 EAL: Detected lcore 5 as core 5 on socket 0 00:04:37.222 EAL: Detected lcore 6 as core 6 on socket 0 00:04:37.222 EAL: Detected lcore 7 as core 8 on socket 0 00:04:37.222 EAL: Detected lcore 8 as core 9 on socket 0 00:04:37.222 EAL: Detected lcore 9 as core 10 on socket 0 00:04:37.222 EAL: Detected lcore 10 as core 11 on socket 0 00:04:37.222 EAL: Detected lcore 11 as core 12 on socket 0 00:04:37.222 EAL: Detected lcore 12 as core 13 on socket 0 00:04:37.222 EAL: Detected lcore 13 as core 16 on socket 0 00:04:37.222 EAL: Detected lcore 14 as core 17 on socket 0 00:04:37.222 EAL: Detected lcore 15 as core 18 on socket 0 00:04:37.222 EAL: Detected lcore 16 as core 19 on socket 0 00:04:37.222 EAL: Detected lcore 17 as core 20 on socket 0 00:04:37.222 EAL: Detected lcore 18 as core 21 on socket 0 00:04:37.222 EAL: Detected lcore 19 as core 25 on socket 0 00:04:37.222 EAL: Detected lcore 20 as core 26 on socket 0 00:04:37.222 EAL: Detected lcore 21 as core 27 on socket 0 00:04:37.222 EAL: Detected lcore 22 as core 28 on socket 0 00:04:37.222 EAL: Detected lcore 23 as core 29 on socket 0 00:04:37.222 EAL: Detected lcore 24 as core 0 on socket 1 00:04:37.222 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:37.222 EAL: Detected lcore 26 as core 2 on socket 1 00:04:37.222 EAL: Detected lcore 27 as core 3 on socket 1 00:04:37.222 EAL: Detected lcore 28 as core 4 on socket 1 00:04:37.222 EAL: Detected lcore 29 as core 5 on socket 1 00:04:37.222 EAL: Detected lcore 30 as core 6 on socket 1 00:04:37.222 EAL: Detected lcore 31 as core 8 on socket 1 00:04:37.222 EAL: Detected lcore 32 as core 10 on socket 1 00:04:37.222 EAL: Detected lcore 33 as core 11 on socket 1 00:04:37.222 EAL: Detected lcore 34 as core 12 on socket 1 00:04:37.222 EAL: Detected lcore 35 as core 13 on socket 1 00:04:37.222 EAL: Detected lcore 36 as core 16 on socket 1 00:04:37.222 EAL: Detected lcore 37 as core 17 on socket 1 00:04:37.222 EAL: Detected lcore 38 as core 18 on socket 1 00:04:37.222 EAL: Detected lcore 39 as core 19 on socket 1 00:04:37.222 EAL: Detected lcore 40 as core 20 on socket 1 00:04:37.222 EAL: Detected lcore 41 as core 21 on socket 1 00:04:37.222 EAL: Detected lcore 42 as core 24 on socket 1 00:04:37.222 EAL: Detected lcore 43 as core 25 on socket 1 00:04:37.222 EAL: Detected lcore 44 as core 26 on socket 1 00:04:37.222 EAL: Detected lcore 45 as core 27 on socket 1 00:04:37.222 EAL: Detected lcore 46 as core 28 on socket 1 00:04:37.222 EAL: Detected lcore 47 as core 29 on socket 1 00:04:37.223 EAL: Detected lcore 48 as core 0 on socket 0 00:04:37.223 EAL: Detected lcore 49 as core 1 on socket 0 00:04:37.223 EAL: Detected lcore 50 as core 2 on socket 0 00:04:37.223 EAL: Detected lcore 51 as core 3 on socket 0 00:04:37.223 EAL: Detected lcore 52 as core 4 on socket 0 00:04:37.223 EAL: Detected lcore 53 as core 5 on socket 0 00:04:37.223 EAL: Detected lcore 54 as core 6 on socket 0 00:04:37.223 EAL: Detected lcore 55 as core 8 on socket 0 00:04:37.223 EAL: Detected lcore 56 as core 9 on socket 0 00:04:37.223 EAL: Detected lcore 57 as core 10 on socket 0 00:04:37.223 EAL: Detected lcore 58 as core 11 on socket 0 00:04:37.223 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:37.223 EAL: Detected lcore 60 as core 13 on socket 0 00:04:37.223 EAL: Detected lcore 61 as core 16 on socket 0 00:04:37.223 EAL: Detected lcore 62 as core 17 on socket 0 00:04:37.223 EAL: Detected lcore 63 as core 18 on socket 0 00:04:37.223 EAL: Detected lcore 64 as core 19 on socket 0 00:04:37.223 EAL: Detected lcore 65 as core 20 on socket 0 00:04:37.223 EAL: Detected lcore 66 as core 21 on socket 0 00:04:37.223 EAL: Detected lcore 67 as core 25 on socket 0 00:04:37.223 EAL: Detected lcore 68 as core 26 on socket 0 00:04:37.223 EAL: Detected lcore 69 as core 27 on socket 0 00:04:37.223 EAL: Detected lcore 70 as core 28 on socket 0 00:04:37.223 EAL: Detected lcore 71 as core 29 on socket 0 00:04:37.223 EAL: Detected lcore 72 as core 0 on socket 1 00:04:37.223 EAL: Detected lcore 73 as core 1 on socket 1 00:04:37.223 EAL: Detected lcore 74 as core 2 on socket 1 00:04:37.223 EAL: Detected lcore 75 as core 3 on socket 1 00:04:37.223 EAL: Detected lcore 76 as core 4 on socket 1 00:04:37.223 EAL: Detected lcore 77 as core 5 on socket 1 00:04:37.223 EAL: Detected lcore 78 as core 6 on socket 1 00:04:37.223 EAL: Detected lcore 79 as core 8 on socket 1 00:04:37.223 EAL: Detected lcore 80 as core 10 on socket 1 00:04:37.223 EAL: Detected lcore 81 as core 11 on socket 1 00:04:37.223 EAL: Detected lcore 82 as core 12 on socket 1 00:04:37.223 EAL: Detected lcore 83 as core 13 on socket 1 00:04:37.223 EAL: Detected lcore 84 as core 16 on socket 1 00:04:37.223 EAL: Detected lcore 85 as core 17 on socket 1 00:04:37.223 EAL: Detected lcore 86 as core 18 on socket 1 00:04:37.223 EAL: Detected lcore 87 as core 19 on socket 1 00:04:37.223 EAL: Detected lcore 88 as core 20 on socket 1 00:04:37.223 EAL: Detected lcore 89 as core 21 on socket 1 00:04:37.223 EAL: Detected lcore 90 as core 24 on socket 1 00:04:37.223 EAL: Detected lcore 91 as core 25 on socket 1 00:04:37.223 EAL: Detected lcore 92 as core 26 on socket 1 00:04:37.223 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:37.223 EAL: Detected lcore 94 as core 28 on socket 1 00:04:37.223 EAL: Detected lcore 95 as core 29 on socket 1 00:04:37.223 EAL: Maximum logical cores by configuration: 128 00:04:37.223 EAL: Detected CPU lcores: 96 00:04:37.223 EAL: Detected NUMA nodes: 2 00:04:37.223 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:37.223 EAL: Detected shared linkage of DPDK 00:04:37.223 EAL: No shared files mode enabled, IPC will be disabled 00:04:37.223 EAL: Bus pci wants IOVA as 'DC' 00:04:37.223 EAL: Buses did not request a specific IOVA mode. 00:04:37.223 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:37.223 EAL: Selected IOVA mode 'VA' 00:04:37.223 EAL: Probing VFIO support... 00:04:37.223 EAL: IOMMU type 1 (Type 1) is supported 00:04:37.223 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:37.223 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:37.223 EAL: VFIO support initialized 00:04:37.223 EAL: Ask a virtual area of 0x2e000 bytes 00:04:37.223 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:37.223 EAL: Setting up physically contiguous memory... 
00:04:37.223 EAL: Setting maximum number of open files to 524288 00:04:37.223 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:37.223 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:37.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:37.223 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:37.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.223 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:37.223 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:37.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.223 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:37.223 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:37.223 EAL: Hugepages will be freed exactly as allocated. 
00:04:37.223 EAL: No shared files mode enabled, IPC is disabled 00:04:37.223 EAL: No shared files mode enabled, IPC is disabled 00:04:37.223 EAL: TSC frequency is ~2100000 KHz 00:04:37.223 EAL: Main lcore 0 is ready (tid=7f015022ca00;cpuset=[0]) 00:04:37.223 EAL: Trying to obtain current memory policy. 00:04:37.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.223 EAL: Restoring previous memory policy: 0 00:04:37.223 EAL: request: mp_malloc_sync 00:04:37.223 EAL: No shared files mode enabled, IPC is disabled 00:04:37.223 EAL: Heap on socket 0 was expanded by 2MB 00:04:37.223 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:37.484 EAL: Mem event callback 'spdk:(nil)' registered 00:04:37.484 00:04:37.484 00:04:37.484 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.484 http://cunit.sourceforge.net/ 00:04:37.484 00:04:37.484 00:04:37.484 Suite: components_suite 00:04:37.484 Test: vtophys_malloc_test ...passed 00:04:37.484 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.484 EAL: Trying to obtain current memory policy. 
00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.484 EAL: Trying to obtain current memory policy. 
00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.484 EAL: Trying to obtain current memory policy. 
00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.484 EAL: Restoring previous memory policy: 4 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.484 EAL: request: mp_malloc_sync 00:04:37.484 EAL: No shared files mode enabled, IPC is disabled 00:04:37.484 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.484 EAL: Trying to obtain current memory policy. 00:04:37.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.744 EAL: Restoring previous memory policy: 4 00:04:37.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.744 EAL: request: mp_malloc_sync 00:04:37.744 EAL: No shared files mode enabled, IPC is disabled 00:04:37.744 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.003 EAL: request: mp_malloc_sync 00:04:38.003 EAL: No shared files mode enabled, IPC is disabled 00:04:38.003 EAL: Heap on socket 0 was shrunk by 514MB 00:04:38.003 EAL: Trying to obtain current memory policy. 
00:04:38.003 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.003 EAL: Restoring previous memory policy: 4 00:04:38.003 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.003 EAL: request: mp_malloc_sync 00:04:38.003 EAL: No shared files mode enabled, IPC is disabled 00:04:38.003 EAL: Heap on socket 0 was expanded by 1026MB 00:04:38.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.262 EAL: request: mp_malloc_sync 00:04:38.262 EAL: No shared files mode enabled, IPC is disabled 00:04:38.262 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.262 passed 00:04:38.262 00:04:38.262 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.262 suites 1 1 n/a 0 0 00:04:38.262 tests 2 2 2 0 0 00:04:38.262 asserts 497 497 497 0 n/a 00:04:38.262 00:04:38.262 Elapsed time = 0.974 seconds 00:04:38.262 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.521 EAL: request: mp_malloc_sync 00:04:38.521 EAL: No shared files mode enabled, IPC is disabled 00:04:38.521 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.521 EAL: No shared files mode enabled, IPC is disabled 00:04:38.521 EAL: No shared files mode enabled, IPC is disabled 00:04:38.521 EAL: No shared files mode enabled, IPC is disabled 00:04:38.521 00:04:38.521 real 0m1.101s 00:04:38.521 user 0m0.646s 00:04:38.521 sys 0m0.423s 00:04:38.521 15:38:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.521 15:38:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:38.521 ************************************ 00:04:38.521 END TEST env_vtophys 00:04:38.521 ************************************ 00:04:38.521 15:38:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:38.521 15:38:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.521 15:38:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.521 15:38:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.521 
************************************ 00:04:38.521 START TEST env_pci 00:04:38.521 ************************************ 00:04:38.521 15:38:48 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:38.521 00:04:38.521 00:04:38.522 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.522 http://cunit.sourceforge.net/ 00:04:38.522 00:04:38.522 00:04:38.522 Suite: pci 00:04:38.522 Test: pci_hook ...[2024-10-01 15:38:48.541174] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2235345 has claimed it 00:04:38.522 EAL: Cannot find device (10000:00:01.0) 00:04:38.522 EAL: Failed to attach device on primary process 00:04:38.522 passed 00:04:38.522 00:04:38.522 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.522 suites 1 1 n/a 0 0 00:04:38.522 tests 1 1 1 0 0 00:04:38.522 asserts 25 25 25 0 n/a 00:04:38.522 00:04:38.522 Elapsed time = 0.036 seconds 00:04:38.522 00:04:38.522 real 0m0.054s 00:04:38.522 user 0m0.019s 00:04:38.522 sys 0m0.035s 00:04:38.522 15:38:48 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.522 15:38:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 END TEST env_pci 00:04:38.522 ************************************ 00:04:38.522 15:38:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.522 15:38:48 env -- env/env.sh@15 -- # uname 00:04:38.522 15:38:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.522 15:38:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.522 15:38:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.522 15:38:48 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:38.522 15:38:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.522 15:38:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 START TEST env_dpdk_post_init 00:04:38.522 ************************************ 00:04:38.522 15:38:48 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.522 EAL: Detected CPU lcores: 96 00:04:38.522 EAL: Detected NUMA nodes: 2 00:04:38.522 EAL: Detected shared linkage of DPDK 00:04:38.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.522 EAL: Selected IOVA mode 'VA' 00:04:38.522 EAL: VFIO support initialized 00:04:38.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.782 EAL: Using IOMMU type 1 (Type 1) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:38.782 EAL: Ignore mapping IO port bar(1) 00:04:38.782 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:39.718 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:39.718 EAL: Ignore mapping IO port bar(1) 00:04:39.718 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:43.908 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:43.908 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:43.908 Starting DPDK initialization... 00:04:43.908 Starting SPDK post initialization... 00:04:43.908 SPDK NVMe probe 00:04:43.908 Attaching to 0000:5e:00.0 00:04:43.908 Attached to 0000:5e:00.0 00:04:43.908 Cleaning up... 
00:04:43.908 00:04:43.908 real 0m4.931s 00:04:43.908 user 0m3.533s 00:04:43.908 sys 0m0.472s 00:04:43.908 15:38:53 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.908 15:38:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.908 ************************************ 00:04:43.908 END TEST env_dpdk_post_init 00:04:43.908 ************************************ 00:04:43.908 15:38:53 env -- env/env.sh@26 -- # uname 00:04:43.908 15:38:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.908 15:38:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.908 15:38:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.908 15:38:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.908 15:38:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.908 ************************************ 00:04:43.908 START TEST env_mem_callbacks 00:04:43.908 ************************************ 00:04:43.908 15:38:53 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.908 EAL: Detected CPU lcores: 96 00:04:43.908 EAL: Detected NUMA nodes: 2 00:04:43.908 EAL: Detected shared linkage of DPDK 00:04:43.908 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.908 EAL: Selected IOVA mode 'VA' 00:04:43.908 EAL: VFIO support initialized 00:04:43.908 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.908 00:04:43.908 00:04:43.908 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.908 http://cunit.sourceforge.net/ 00:04:43.908 00:04:43.908 00:04:43.908 Suite: memory 00:04:43.908 Test: test ... 
00:04:43.908 register 0x200000200000 2097152
00:04:43.908 malloc 3145728
00:04:43.908 register 0x200000400000 4194304
00:04:43.908 buf 0x200000500000 len 3145728 PASSED
00:04:43.908 malloc 64
00:04:43.908 buf 0x2000004fff40 len 64 PASSED
00:04:43.908 malloc 4194304
00:04:43.908 register 0x200000800000 6291456
00:04:43.908 buf 0x200000a00000 len 4194304 PASSED
00:04:43.908 free 0x200000500000 3145728
00:04:43.909 free 0x2000004fff40 64
00:04:43.909 unregister 0x200000400000 4194304 PASSED
00:04:43.909 free 0x200000a00000 4194304
00:04:43.909 unregister 0x200000800000 6291456 PASSED
00:04:43.909 malloc 8388608
00:04:43.909 register 0x200000400000 10485760
00:04:43.909 buf 0x200000600000 len 8388608 PASSED
00:04:43.909 free 0x200000600000 8388608
00:04:43.909 unregister 0x200000400000 10485760 PASSED
00:04:43.909 passed
00:04:43.909
00:04:43.909 Run Summary: Type  Total    Ran Passed Failed Inactive
00:04:43.909              suites     1      1    n/a      0        0
00:04:43.909              tests      1      1      1      0        0
00:04:43.909              asserts   15     15     15      0      n/a
00:04:43.909
00:04:43.909 Elapsed time = 0.008 seconds
00:04:43.909
00:04:43.909 real 0m0.058s
00:04:43.909 user 0m0.022s
00:04:43.909 sys 0m0.036s
00:04:43.909 15:38:53 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.909 15:38:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:43.909 ************************************
00:04:43.909 END TEST env_mem_callbacks
00:04:43.909 ************************************
00:04:43.909
00:04:43.909 real 0m6.816s
00:04:43.909 user 0m4.602s
00:04:43.909 sys 0m1.288s
00:04:43.909 15:38:53 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.909 15:38:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:43.909 ************************************
00:04:43.909 END TEST env
00:04:43.909 ************************************
00:04:43.909 15:38:53 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:43.909 15:38:53
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.909 15:38:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.909 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:04:43.909 ************************************ 00:04:43.909 START TEST rpc 00:04:43.909 ************************************ 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:43.909 * Looking for test storage... 00:04:43.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.909 15:38:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.909 15:38:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.909 15:38:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.909 15:38:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.909 15:38:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.909 15:38:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.909 15:38:53 rpc -- scripts/common.sh@345 -- # : 1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.909 15:38:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.909 15:38:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.909 15:38:53 rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.909 15:38:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.909 15:38:53 rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.909 15:38:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.909 15:38:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.909 15:38:53 rpc -- scripts/common.sh@368 -- # return 0 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.909 --rc genhtml_branch_coverage=1 00:04:43.909 --rc genhtml_function_coverage=1 00:04:43.909 --rc genhtml_legend=1 00:04:43.909 --rc geninfo_all_blocks=1 00:04:43.909 --rc geninfo_unexecuted_blocks=1 00:04:43.909 00:04:43.909 ' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.909 --rc genhtml_branch_coverage=1 00:04:43.909 --rc genhtml_function_coverage=1 00:04:43.909 --rc genhtml_legend=1 00:04:43.909 --rc geninfo_all_blocks=1 00:04:43.909 --rc geninfo_unexecuted_blocks=1 00:04:43.909 00:04:43.909 ' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:43.909 --rc genhtml_branch_coverage=1 00:04:43.909 --rc genhtml_function_coverage=1 00:04:43.909 --rc genhtml_legend=1 00:04:43.909 --rc geninfo_all_blocks=1 00:04:43.909 --rc geninfo_unexecuted_blocks=1 00:04:43.909 00:04:43.909 ' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.909 --rc genhtml_branch_coverage=1 00:04:43.909 --rc genhtml_function_coverage=1 00:04:43.909 --rc genhtml_legend=1 00:04:43.909 --rc geninfo_all_blocks=1 00:04:43.909 --rc geninfo_unexecuted_blocks=1 00:04:43.909 00:04:43.909 ' 00:04:43.909 15:38:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2236391 00:04:43.909 15:38:53 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:43.909 15:38:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.909 15:38:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2236391 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@831 -- # '[' -z 2236391 ']' 00:04:43.909 15:38:53 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.909 15:38:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.909 15:38:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.909 15:38:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.909 15:38:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.909 [2024-10-01 15:38:54.050183] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:04:43.909 [2024-10-01 15:38:54.050230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236391 ] 00:04:44.169 [2024-10-01 15:38:54.119586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.169 [2024-10-01 15:38:54.192939] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.169 [2024-10-01 15:38:54.192978] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2236391' to capture a snapshot of events at runtime. 00:04:44.169 [2024-10-01 15:38:54.192985] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.169 [2024-10-01 15:38:54.192991] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.169 [2024-10-01 15:38:54.192997] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2236391 for offline analysis/debug. 
00:04:44.169 [2024-10-01 15:38:54.193016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.735 15:38:54 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.735 15:38:54 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:44.735 15:38:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.735 15:38:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:44.735 15:38:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.735 15:38:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.735 15:38:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.735 15:38:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.735 15:38:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.735 ************************************ 00:04:44.735 START TEST rpc_integrity 00:04:44.735 ************************************ 00:04:44.735 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:44.735 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.735 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.735 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.735 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.735 15:38:54 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 15:38:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.994 { 00:04:44.994 "name": "Malloc0", 00:04:44.994 "aliases": [ 00:04:44.994 "a51c7b1c-3aaf-4000-8c68-f514dbf80bd8" 00:04:44.994 ], 00:04:44.994 "product_name": "Malloc disk", 00:04:44.994 "block_size": 512, 00:04:44.994 "num_blocks": 16384, 00:04:44.994 "uuid": "a51c7b1c-3aaf-4000-8c68-f514dbf80bd8", 00:04:44.994 "assigned_rate_limits": { 00:04:44.994 "rw_ios_per_sec": 0, 00:04:44.994 "rw_mbytes_per_sec": 0, 00:04:44.994 "r_mbytes_per_sec": 0, 00:04:44.994 "w_mbytes_per_sec": 0 00:04:44.994 }, 00:04:44.994 "claimed": false, 00:04:44.994 "zoned": false, 00:04:44.994 "supported_io_types": { 00:04:44.994 "read": true, 00:04:44.994 "write": true, 00:04:44.994 "unmap": true, 00:04:44.994 "flush": true, 00:04:44.994 "reset": true, 00:04:44.994 "nvme_admin": false, 00:04:44.994 "nvme_io": false, 00:04:44.994 "nvme_io_md": false, 00:04:44.994 "write_zeroes": true, 00:04:44.994 "zcopy": true, 00:04:44.994 "get_zone_info": false, 00:04:44.994 
"zone_management": false, 00:04:44.994 "zone_append": false, 00:04:44.994 "compare": false, 00:04:44.994 "compare_and_write": false, 00:04:44.994 "abort": true, 00:04:44.994 "seek_hole": false, 00:04:44.994 "seek_data": false, 00:04:44.994 "copy": true, 00:04:44.994 "nvme_iov_md": false 00:04:44.994 }, 00:04:44.994 "memory_domains": [ 00:04:44.994 { 00:04:44.994 "dma_device_id": "system", 00:04:44.994 "dma_device_type": 1 00:04:44.994 }, 00:04:44.994 { 00:04:44.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.994 "dma_device_type": 2 00:04:44.994 } 00:04:44.994 ], 00:04:44.994 "driver_specific": {} 00:04:44.994 } 00:04:44.994 ]' 00:04:44.994 15:38:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.994 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.994 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 [2024-10-01 15:38:55.044687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:44.994 [2024-10-01 15:38:55.044717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.994 [2024-10-01 15:38:55.044729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff47c0 00:04:44.994 [2024-10-01 15:38:55.044736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.994 [2024-10-01 15:38:55.045818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.994 [2024-10-01 15:38:55.045839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.994 Passthru0 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.994 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.994 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.994 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.994 { 00:04:44.994 "name": "Malloc0", 00:04:44.994 "aliases": [ 00:04:44.994 "a51c7b1c-3aaf-4000-8c68-f514dbf80bd8" 00:04:44.994 ], 00:04:44.994 "product_name": "Malloc disk", 00:04:44.994 "block_size": 512, 00:04:44.994 "num_blocks": 16384, 00:04:44.994 "uuid": "a51c7b1c-3aaf-4000-8c68-f514dbf80bd8", 00:04:44.994 "assigned_rate_limits": { 00:04:44.994 "rw_ios_per_sec": 0, 00:04:44.994 "rw_mbytes_per_sec": 0, 00:04:44.994 "r_mbytes_per_sec": 0, 00:04:44.994 "w_mbytes_per_sec": 0 00:04:44.994 }, 00:04:44.994 "claimed": true, 00:04:44.994 "claim_type": "exclusive_write", 00:04:44.994 "zoned": false, 00:04:44.995 "supported_io_types": { 00:04:44.995 "read": true, 00:04:44.995 "write": true, 00:04:44.995 "unmap": true, 00:04:44.995 "flush": true, 00:04:44.995 "reset": true, 00:04:44.995 "nvme_admin": false, 00:04:44.995 "nvme_io": false, 00:04:44.995 "nvme_io_md": false, 00:04:44.995 "write_zeroes": true, 00:04:44.995 "zcopy": true, 00:04:44.995 "get_zone_info": false, 00:04:44.995 "zone_management": false, 00:04:44.995 "zone_append": false, 00:04:44.995 "compare": false, 00:04:44.995 "compare_and_write": false, 00:04:44.995 "abort": true, 00:04:44.995 "seek_hole": false, 00:04:44.995 "seek_data": false, 00:04:44.995 "copy": true, 00:04:44.995 "nvme_iov_md": false 00:04:44.995 }, 00:04:44.995 "memory_domains": [ 00:04:44.995 { 00:04:44.995 "dma_device_id": "system", 00:04:44.995 "dma_device_type": 1 00:04:44.995 }, 00:04:44.995 { 00:04:44.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.995 "dma_device_type": 2 00:04:44.995 } 00:04:44.995 ], 00:04:44.995 "driver_specific": {} 00:04:44.995 }, 00:04:44.995 { 
00:04:44.995 "name": "Passthru0", 00:04:44.995 "aliases": [ 00:04:44.995 "922eb1b9-0c4f-514b-8da5-e46810f35b88" 00:04:44.995 ], 00:04:44.995 "product_name": "passthru", 00:04:44.995 "block_size": 512, 00:04:44.995 "num_blocks": 16384, 00:04:44.995 "uuid": "922eb1b9-0c4f-514b-8da5-e46810f35b88", 00:04:44.995 "assigned_rate_limits": { 00:04:44.995 "rw_ios_per_sec": 0, 00:04:44.995 "rw_mbytes_per_sec": 0, 00:04:44.995 "r_mbytes_per_sec": 0, 00:04:44.995 "w_mbytes_per_sec": 0 00:04:44.995 }, 00:04:44.995 "claimed": false, 00:04:44.995 "zoned": false, 00:04:44.995 "supported_io_types": { 00:04:44.995 "read": true, 00:04:44.995 "write": true, 00:04:44.995 "unmap": true, 00:04:44.995 "flush": true, 00:04:44.995 "reset": true, 00:04:44.995 "nvme_admin": false, 00:04:44.995 "nvme_io": false, 00:04:44.995 "nvme_io_md": false, 00:04:44.995 "write_zeroes": true, 00:04:44.995 "zcopy": true, 00:04:44.995 "get_zone_info": false, 00:04:44.995 "zone_management": false, 00:04:44.995 "zone_append": false, 00:04:44.995 "compare": false, 00:04:44.995 "compare_and_write": false, 00:04:44.995 "abort": true, 00:04:44.995 "seek_hole": false, 00:04:44.995 "seek_data": false, 00:04:44.995 "copy": true, 00:04:44.995 "nvme_iov_md": false 00:04:44.995 }, 00:04:44.995 "memory_domains": [ 00:04:44.995 { 00:04:44.995 "dma_device_id": "system", 00:04:44.995 "dma_device_type": 1 00:04:44.995 }, 00:04:44.995 { 00:04:44.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.995 "dma_device_type": 2 00:04:44.995 } 00:04:44.995 ], 00:04:44.995 "driver_specific": { 00:04:44.995 "passthru": { 00:04:44.995 "name": "Passthru0", 00:04:44.995 "base_bdev_name": "Malloc0" 00:04:44.995 } 00:04:44.995 } 00:04:44.995 } 00:04:44.995 ]' 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.995 15:38:55 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.995 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.995 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.253 15:38:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.253 00:04:45.253 real 0m0.275s 00:04:45.253 user 0m0.167s 00:04:45.253 sys 0m0.038s 00:04:45.253 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.253 15:38:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 ************************************ 00:04:45.253 END TEST rpc_integrity 00:04:45.253 ************************************ 00:04:45.253 15:38:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.253 15:38:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.253 15:38:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.253 15:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 ************************************ 00:04:45.253 START TEST rpc_plugins 
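The rpc_integrity flow traced above reduces to five RPCs: create a malloc bdev, claim it with a passthru, confirm bdev_get_bdevs reports both, then delete each and confirm the list is empty. A minimal sketch of the same calls as SPDK JSON-RPC 2.0 wire requests follows; the parameter names mirror the commands in the log and are illustrative, not a frozen API contract:

```python
import json

def rpc_request(method, params=None, req_id=1):
    # SPDK's rpc.py sends JSON-RPC 2.0 objects of this shape over a Unix socket.
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return req

# The rpc_integrity sequence: an 8 MiB malloc bdev (16384 x 512-byte blocks,
# matching num_blocks/block_size in the dump above), a passthru claim, a list,
# then teardown in reverse order.
sequence = [
    rpc_request("bdev_malloc_create", {"block_size": 512, "num_blocks": 16384}, 1),
    rpc_request("bdev_passthru_create", {"base_bdev_name": "Malloc0", "name": "Passthru0"}, 2),
    rpc_request("bdev_get_bdevs", None, 3),
    rpc_request("bdev_passthru_delete", {"name": "Passthru0"}, 4),
    rpc_request("bdev_malloc_delete", {"name": "Malloc0"}, 5),
]
wire = [json.dumps(r) for r in sequence]
```

The `'[' 2 == 2 ']'` and `'[' 0 == 0 ']'` checks in the log are simply `jq length` over the bdev_get_bdevs reply before and after teardown.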
00:04:45.253 ************************************ 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:45.253 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.253 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.253 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.253 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.253 { 00:04:45.253 "name": "Malloc1", 00:04:45.253 "aliases": [ 00:04:45.253 "0658b669-c8a6-44ab-b9a2-508188caa8b7" 00:04:45.253 ], 00:04:45.253 "product_name": "Malloc disk", 00:04:45.253 "block_size": 4096, 00:04:45.253 "num_blocks": 256, 00:04:45.253 "uuid": "0658b669-c8a6-44ab-b9a2-508188caa8b7", 00:04:45.253 "assigned_rate_limits": { 00:04:45.253 "rw_ios_per_sec": 0, 00:04:45.253 "rw_mbytes_per_sec": 0, 00:04:45.253 "r_mbytes_per_sec": 0, 00:04:45.254 "w_mbytes_per_sec": 0 00:04:45.254 }, 00:04:45.254 "claimed": false, 00:04:45.254 "zoned": false, 00:04:45.254 "supported_io_types": { 00:04:45.254 "read": true, 00:04:45.254 "write": true, 00:04:45.254 "unmap": true, 00:04:45.254 "flush": true, 00:04:45.254 "reset": true, 00:04:45.254 "nvme_admin": false, 00:04:45.254 "nvme_io": false, 00:04:45.254 "nvme_io_md": false, 00:04:45.254 "write_zeroes": true, 00:04:45.254 "zcopy": true, 00:04:45.254 "get_zone_info": false, 00:04:45.254 "zone_management": false, 00:04:45.254 
"zone_append": false, 00:04:45.254 "compare": false, 00:04:45.254 "compare_and_write": false, 00:04:45.254 "abort": true, 00:04:45.254 "seek_hole": false, 00:04:45.254 "seek_data": false, 00:04:45.254 "copy": true, 00:04:45.254 "nvme_iov_md": false 00:04:45.254 }, 00:04:45.254 "memory_domains": [ 00:04:45.254 { 00:04:45.254 "dma_device_id": "system", 00:04:45.254 "dma_device_type": 1 00:04:45.254 }, 00:04:45.254 { 00:04:45.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.254 "dma_device_type": 2 00:04:45.254 } 00:04:45.254 ], 00:04:45.254 "driver_specific": {} 00:04:45.254 } 00:04:45.254 ]' 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.254 15:38:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.254 00:04:45.254 real 0m0.132s 00:04:45.254 user 0m0.077s 00:04:45.254 sys 0m0.020s 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.254 15:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.254 ************************************ 
00:04:45.254 END TEST rpc_plugins 00:04:45.254 ************************************ 00:04:45.254 15:38:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.254 15:38:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.254 15:38:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.254 15:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.512 ************************************ 00:04:45.512 START TEST rpc_trace_cmd_test 00:04:45.512 ************************************ 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.512 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2236391", 00:04:45.512 "tpoint_group_mask": "0x8", 00:04:45.512 "iscsi_conn": { 00:04:45.512 "mask": "0x2", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "scsi": { 00:04:45.512 "mask": "0x4", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "bdev": { 00:04:45.512 "mask": "0x8", 00:04:45.512 "tpoint_mask": "0xffffffffffffffff" 00:04:45.512 }, 00:04:45.512 "nvmf_rdma": { 00:04:45.512 "mask": "0x10", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "nvmf_tcp": { 00:04:45.512 "mask": "0x20", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "ftl": { 00:04:45.512 "mask": "0x40", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "blobfs": { 00:04:45.512 "mask": "0x80", 00:04:45.512 
"tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "dsa": { 00:04:45.512 "mask": "0x200", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "thread": { 00:04:45.512 "mask": "0x400", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "nvme_pcie": { 00:04:45.512 "mask": "0x800", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "iaa": { 00:04:45.512 "mask": "0x1000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "nvme_tcp": { 00:04:45.512 "mask": "0x2000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "bdev_nvme": { 00:04:45.512 "mask": "0x4000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "sock": { 00:04:45.512 "mask": "0x8000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "blob": { 00:04:45.512 "mask": "0x10000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 }, 00:04:45.512 "bdev_raid": { 00:04:45.512 "mask": "0x20000", 00:04:45.512 "tpoint_mask": "0x0" 00:04:45.512 } 00:04:45.512 }' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.512 00:04:45.512 real 0m0.199s 00:04:45.512 user 0m0.167s 00:04:45.512 sys 0m0.020s 
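The jq checks that follow the trace_get_info dump assert three things: more than two trace groups are reported, the reply carries the `tpoint_group_mask` and `tpoint_shm_path` keys, and the bdev group's `tpoint_mask` is non-zero (the group selected by the 0x8 group mask). A sketch of the same assertions in Python, against a trimmed copy of the payload above:

```python
import json

# Trimmed copy of the trace_get_info reply captured in the log above.
info = json.loads("""
{
  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2236391",
  "tpoint_group_mask": "0x8",
  "bdev":      {"mask": "0x8",    "tpoint_mask": "0xffffffffffffffff"},
  "nvmf_tcp":  {"mask": "0x20",   "tpoint_mask": "0x0"},
  "bdev_nvme": {"mask": "0x4000", "tpoint_mask": "0x0"}
}
""")

def bdev_tracepoints_enabled(reply):
    # Mirrors rpc.sh's checks: jq length > 2, has("tpoint_group_mask"),
    # has("tpoint_shm_path"), and .bdev.tpoint_mask != 0x0.
    return (len(reply) > 2
            and "tpoint_group_mask" in reply
            and "tpoint_shm_path" in reply
            and int(reply["bdev"]["tpoint_mask"], 16) != 0)
```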
00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.512 15:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.512 ************************************ 00:04:45.512 END TEST rpc_trace_cmd_test 00:04:45.512 ************************************ 00:04:45.512 15:38:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.512 15:38:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.512 15:38:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.512 15:38:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.512 15:38:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.512 15:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.770 ************************************ 00:04:45.770 START TEST rpc_daemon_integrity 00:04:45.770 ************************************ 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.770 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.771 { 00:04:45.771 "name": "Malloc2", 00:04:45.771 "aliases": [ 00:04:45.771 "a87047a6-457f-4f6a-a12e-e13956a71836" 00:04:45.771 ], 00:04:45.771 "product_name": "Malloc disk", 00:04:45.771 "block_size": 512, 00:04:45.771 "num_blocks": 16384, 00:04:45.771 "uuid": "a87047a6-457f-4f6a-a12e-e13956a71836", 00:04:45.771 "assigned_rate_limits": { 00:04:45.771 "rw_ios_per_sec": 0, 00:04:45.771 "rw_mbytes_per_sec": 0, 00:04:45.771 "r_mbytes_per_sec": 0, 00:04:45.771 "w_mbytes_per_sec": 0 00:04:45.771 }, 00:04:45.771 "claimed": false, 00:04:45.771 "zoned": false, 00:04:45.771 "supported_io_types": { 00:04:45.771 "read": true, 00:04:45.771 "write": true, 00:04:45.771 "unmap": true, 00:04:45.771 "flush": true, 00:04:45.771 "reset": true, 00:04:45.771 "nvme_admin": false, 00:04:45.771 "nvme_io": false, 00:04:45.771 "nvme_io_md": false, 00:04:45.771 "write_zeroes": true, 00:04:45.771 "zcopy": true, 00:04:45.771 "get_zone_info": false, 00:04:45.771 "zone_management": false, 00:04:45.771 "zone_append": false, 00:04:45.771 "compare": false, 00:04:45.771 "compare_and_write": false, 00:04:45.771 "abort": true, 00:04:45.771 "seek_hole": false, 00:04:45.771 "seek_data": false, 00:04:45.771 "copy": true, 00:04:45.771 "nvme_iov_md": false 00:04:45.771 }, 00:04:45.771 "memory_domains": [ 00:04:45.771 { 00:04:45.771 "dma_device_id": "system", 00:04:45.771 "dma_device_type": 1 00:04:45.771 }, 
00:04:45.771 { 00:04:45.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.771 "dma_device_type": 2 00:04:45.771 } 00:04:45.771 ], 00:04:45.771 "driver_specific": {} 00:04:45.771 } 00:04:45.771 ]' 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.771 [2024-10-01 15:38:55.854894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:45.771 [2024-10-01 15:38:55.854926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.771 [2024-10-01 15:38:55.854938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff7660 00:04:45.771 [2024-10-01 15:38:55.854945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.771 [2024-10-01 15:38:55.856017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.771 [2024-10-01 15:38:55.856039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.771 Passthru0 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.771 { 
00:04:45.771 "name": "Malloc2", 00:04:45.771 "aliases": [ 00:04:45.771 "a87047a6-457f-4f6a-a12e-e13956a71836" 00:04:45.771 ], 00:04:45.771 "product_name": "Malloc disk", 00:04:45.771 "block_size": 512, 00:04:45.771 "num_blocks": 16384, 00:04:45.771 "uuid": "a87047a6-457f-4f6a-a12e-e13956a71836", 00:04:45.771 "assigned_rate_limits": { 00:04:45.771 "rw_ios_per_sec": 0, 00:04:45.771 "rw_mbytes_per_sec": 0, 00:04:45.771 "r_mbytes_per_sec": 0, 00:04:45.771 "w_mbytes_per_sec": 0 00:04:45.771 }, 00:04:45.771 "claimed": true, 00:04:45.771 "claim_type": "exclusive_write", 00:04:45.771 "zoned": false, 00:04:45.771 "supported_io_types": { 00:04:45.771 "read": true, 00:04:45.771 "write": true, 00:04:45.771 "unmap": true, 00:04:45.771 "flush": true, 00:04:45.771 "reset": true, 00:04:45.771 "nvme_admin": false, 00:04:45.771 "nvme_io": false, 00:04:45.771 "nvme_io_md": false, 00:04:45.771 "write_zeroes": true, 00:04:45.771 "zcopy": true, 00:04:45.771 "get_zone_info": false, 00:04:45.771 "zone_management": false, 00:04:45.771 "zone_append": false, 00:04:45.771 "compare": false, 00:04:45.771 "compare_and_write": false, 00:04:45.771 "abort": true, 00:04:45.771 "seek_hole": false, 00:04:45.771 "seek_data": false, 00:04:45.771 "copy": true, 00:04:45.771 "nvme_iov_md": false 00:04:45.771 }, 00:04:45.771 "memory_domains": [ 00:04:45.771 { 00:04:45.771 "dma_device_id": "system", 00:04:45.771 "dma_device_type": 1 00:04:45.771 }, 00:04:45.771 { 00:04:45.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.771 "dma_device_type": 2 00:04:45.771 } 00:04:45.771 ], 00:04:45.771 "driver_specific": {} 00:04:45.771 }, 00:04:45.771 { 00:04:45.771 "name": "Passthru0", 00:04:45.771 "aliases": [ 00:04:45.771 "ab9a9dcb-219d-5e1f-aa2e-5d9cdaa7b074" 00:04:45.771 ], 00:04:45.771 "product_name": "passthru", 00:04:45.771 "block_size": 512, 00:04:45.771 "num_blocks": 16384, 00:04:45.771 "uuid": "ab9a9dcb-219d-5e1f-aa2e-5d9cdaa7b074", 00:04:45.771 "assigned_rate_limits": { 00:04:45.771 "rw_ios_per_sec": 0, 
00:04:45.771 "rw_mbytes_per_sec": 0, 00:04:45.771 "r_mbytes_per_sec": 0, 00:04:45.771 "w_mbytes_per_sec": 0 00:04:45.771 }, 00:04:45.771 "claimed": false, 00:04:45.771 "zoned": false, 00:04:45.771 "supported_io_types": { 00:04:45.771 "read": true, 00:04:45.771 "write": true, 00:04:45.771 "unmap": true, 00:04:45.771 "flush": true, 00:04:45.771 "reset": true, 00:04:45.771 "nvme_admin": false, 00:04:45.771 "nvme_io": false, 00:04:45.771 "nvme_io_md": false, 00:04:45.771 "write_zeroes": true, 00:04:45.771 "zcopy": true, 00:04:45.771 "get_zone_info": false, 00:04:45.771 "zone_management": false, 00:04:45.771 "zone_append": false, 00:04:45.771 "compare": false, 00:04:45.771 "compare_and_write": false, 00:04:45.771 "abort": true, 00:04:45.771 "seek_hole": false, 00:04:45.771 "seek_data": false, 00:04:45.771 "copy": true, 00:04:45.771 "nvme_iov_md": false 00:04:45.771 }, 00:04:45.771 "memory_domains": [ 00:04:45.771 { 00:04:45.771 "dma_device_id": "system", 00:04:45.771 "dma_device_type": 1 00:04:45.771 }, 00:04:45.771 { 00:04:45.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.771 "dma_device_type": 2 00:04:45.771 } 00:04:45.771 ], 00:04:45.771 "driver_specific": { 00:04:45.771 "passthru": { 00:04:45.771 "name": "Passthru0", 00:04:45.771 "base_bdev_name": "Malloc2" 00:04:45.771 } 00:04:45.771 } 00:04:45.771 } 00:04:45.771 ]' 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.771 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.030 15:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.030 00:04:46.030 real 0m0.275s 00:04:46.030 user 0m0.166s 00:04:46.030 sys 0m0.050s 00:04:46.030 15:38:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.030 15:38:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.030 ************************************ 00:04:46.030 END TEST rpc_daemon_integrity 00:04:46.030 ************************************ 00:04:46.030 15:38:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.030 15:38:56 rpc -- rpc/rpc.sh@84 -- # killprocess 2236391 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@950 -- # '[' -z 2236391 ']' 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@954 -- # kill -0 2236391 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236391 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236391' 00:04:46.030 killing process with pid 2236391 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@969 -- # kill 2236391 00:04:46.030 15:38:56 rpc -- common/autotest_common.sh@974 -- # wait 2236391 00:04:46.289 00:04:46.289 real 0m2.592s 00:04:46.289 user 0m3.268s 00:04:46.289 sys 0m0.740s 00:04:46.289 15:38:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.289 15:38:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.289 ************************************ 00:04:46.289 END TEST rpc 00:04:46.289 ************************************ 00:04:46.289 15:38:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.289 15:38:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.289 15:38:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.289 15:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:46.548 ************************************ 00:04:46.548 START TEST skip_rpc 00:04:46.548 ************************************ 00:04:46.548 15:38:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.548 * Looking for test storage... 
00:04:46.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.548 15:38:56 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.548 15:38:56 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.549 15:38:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.549 --rc genhtml_branch_coverage=1 00:04:46.549 --rc genhtml_function_coverage=1 00:04:46.549 --rc genhtml_legend=1 00:04:46.549 --rc geninfo_all_blocks=1 00:04:46.549 --rc geninfo_unexecuted_blocks=1 00:04:46.549 00:04:46.549 ' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.549 --rc genhtml_branch_coverage=1 00:04:46.549 --rc genhtml_function_coverage=1 00:04:46.549 --rc genhtml_legend=1 00:04:46.549 --rc geninfo_all_blocks=1 00:04:46.549 --rc geninfo_unexecuted_blocks=1 00:04:46.549 00:04:46.549 ' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.549 --rc genhtml_branch_coverage=1 00:04:46.549 --rc genhtml_function_coverage=1 00:04:46.549 --rc genhtml_legend=1 00:04:46.549 --rc geninfo_all_blocks=1 00:04:46.549 --rc geninfo_unexecuted_blocks=1 00:04:46.549 00:04:46.549 ' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.549 --rc genhtml_branch_coverage=1 00:04:46.549 --rc genhtml_function_coverage=1 00:04:46.549 --rc genhtml_legend=1 00:04:46.549 --rc geninfo_all_blocks=1 00:04:46.549 --rc geninfo_unexecuted_blocks=1 00:04:46.549 00:04:46.549 ' 00:04:46.549 15:38:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.549 15:38:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.549 15:38:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.549 15:38:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.549 ************************************ 00:04:46.549 START TEST skip_rpc 00:04:46.549 ************************************ 00:04:46.549 15:38:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:46.549 15:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2237042 00:04:46.549 15:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.549 15:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.549 15:38:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
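skip_rpc launches spdk_tgt with --no-rpc-server, so the test's subsequent `rpc_cmd spdk_get_version` is expected to fail for a mundane reason: nothing ever listens on the RPC Unix socket. A self-contained sketch of that reachability probe (the socket path shown is SPDK's conventional default and is used here only for illustration):

```python
import socket

def rpc_socket_listening(path="/var/tmp/spdk.sock"):
    # rpc_cmd can only succeed if something accepts connections on the
    # target's Unix-domain RPC socket; with --no-rpc-server nothing will,
    # so connect() raises and the probe reports failure.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()
```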
00:04:46.809 [2024-10-01 15:38:56.746904] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:04:46.809 [2024-10-01 15:38:56.746941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237042 ] 00:04:46.809 [2024-10-01 15:38:56.811513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.809 [2024-10-01 15:38:56.882148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.068 15:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.068 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.069 15:39:01 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2237042 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2237042 ']' 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2237042 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2237042 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2237042' 00:04:52.069 killing process with pid 2237042 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2237042 00:04:52.069 15:39:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2237042 00:04:52.069 00:04:52.069 real 0m5.391s 00:04:52.069 user 0m5.131s 00:04:52.069 sys 0m0.290s 00:04:52.069 15:39:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.069 15:39:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.069 ************************************ 00:04:52.069 END TEST skip_rpc 00:04:52.069 ************************************ 00:04:52.069 15:39:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.069 15:39:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.069 15:39:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.069 15:39:02 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.069 ************************************ 00:04:52.069 START TEST skip_rpc_with_json 00:04:52.069 ************************************ 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2238113 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2238113 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2238113 ']' 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.069 15:39:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.069 [2024-10-01 15:39:02.202860] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:04:52.069 [2024-10-01 15:39:02.202908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238113 ] 00:04:52.069 [2024-10-01 15:39:02.252683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.326 [2024-10-01 15:39:02.331753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.890 [2024-10-01 15:39:03.030277] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.890 request: 00:04:52.890 { 00:04:52.890 "trtype": "tcp", 00:04:52.890 "method": "nvmf_get_transports", 00:04:52.890 "req_id": 1 00:04:52.890 } 00:04:52.890 Got JSON-RPC error response 00:04:52.890 response: 00:04:52.890 { 00:04:52.890 "code": -19, 00:04:52.890 "message": "No such device" 00:04:52.890 } 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.890 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.890 [2024-10-01 15:39:03.038367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.890 15:39:03 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.891 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.891 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.891 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.149 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.149 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.149 { 00:04:53.149 "subsystems": [ 00:04:53.149 { 00:04:53.149 "subsystem": "fsdev", 00:04:53.149 "config": [ 00:04:53.149 { 00:04:53.149 "method": "fsdev_set_opts", 00:04:53.149 "params": { 00:04:53.149 "fsdev_io_pool_size": 65535, 00:04:53.149 "fsdev_io_cache_size": 256 00:04:53.149 } 00:04:53.149 } 00:04:53.149 ] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "vfio_user_target", 00:04:53.149 "config": null 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "keyring", 00:04:53.149 "config": [] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "iobuf", 00:04:53.149 "config": [ 00:04:53.149 { 00:04:53.149 "method": "iobuf_set_options", 00:04:53.149 "params": { 00:04:53.149 "small_pool_count": 8192, 00:04:53.149 "large_pool_count": 1024, 00:04:53.149 "small_bufsize": 8192, 00:04:53.149 "large_bufsize": 135168 00:04:53.149 } 00:04:53.149 } 00:04:53.149 ] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "sock", 00:04:53.149 "config": [ 00:04:53.149 { 00:04:53.149 "method": "sock_set_default_impl", 00:04:53.149 "params": { 00:04:53.149 "impl_name": "posix" 00:04:53.149 } 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "method": "sock_impl_set_options", 00:04:53.149 "params": { 00:04:53.149 "impl_name": "ssl", 00:04:53.149 "recv_buf_size": 4096, 00:04:53.149 "send_buf_size": 4096, 00:04:53.149 "enable_recv_pipe": true, 
00:04:53.149 "enable_quickack": false, 00:04:53.149 "enable_placement_id": 0, 00:04:53.149 "enable_zerocopy_send_server": true, 00:04:53.149 "enable_zerocopy_send_client": false, 00:04:53.149 "zerocopy_threshold": 0, 00:04:53.149 "tls_version": 0, 00:04:53.149 "enable_ktls": false 00:04:53.149 } 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "method": "sock_impl_set_options", 00:04:53.149 "params": { 00:04:53.149 "impl_name": "posix", 00:04:53.149 "recv_buf_size": 2097152, 00:04:53.149 "send_buf_size": 2097152, 00:04:53.149 "enable_recv_pipe": true, 00:04:53.149 "enable_quickack": false, 00:04:53.149 "enable_placement_id": 0, 00:04:53.149 "enable_zerocopy_send_server": true, 00:04:53.149 "enable_zerocopy_send_client": false, 00:04:53.149 "zerocopy_threshold": 0, 00:04:53.149 "tls_version": 0, 00:04:53.149 "enable_ktls": false 00:04:53.149 } 00:04:53.149 } 00:04:53.149 ] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "vmd", 00:04:53.149 "config": [] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "accel", 00:04:53.149 "config": [ 00:04:53.149 { 00:04:53.149 "method": "accel_set_options", 00:04:53.149 "params": { 00:04:53.149 "small_cache_size": 128, 00:04:53.149 "large_cache_size": 16, 00:04:53.149 "task_count": 2048, 00:04:53.149 "sequence_count": 2048, 00:04:53.149 "buf_count": 2048 00:04:53.149 } 00:04:53.149 } 00:04:53.149 ] 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "subsystem": "bdev", 00:04:53.149 "config": [ 00:04:53.149 { 00:04:53.149 "method": "bdev_set_options", 00:04:53.149 "params": { 00:04:53.149 "bdev_io_pool_size": 65535, 00:04:53.149 "bdev_io_cache_size": 256, 00:04:53.149 "bdev_auto_examine": true, 00:04:53.149 "iobuf_small_cache_size": 128, 00:04:53.149 "iobuf_large_cache_size": 16 00:04:53.149 } 00:04:53.149 }, 00:04:53.149 { 00:04:53.149 "method": "bdev_raid_set_options", 00:04:53.149 "params": { 00:04:53.149 "process_window_size_kb": 1024, 00:04:53.149 "process_max_bandwidth_mb_sec": 0 00:04:53.149 } 00:04:53.150 }, 
00:04:53.150 { 00:04:53.150 "method": "bdev_iscsi_set_options", 00:04:53.150 "params": { 00:04:53.150 "timeout_sec": 30 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "bdev_nvme_set_options", 00:04:53.150 "params": { 00:04:53.150 "action_on_timeout": "none", 00:04:53.150 "timeout_us": 0, 00:04:53.150 "timeout_admin_us": 0, 00:04:53.150 "keep_alive_timeout_ms": 10000, 00:04:53.150 "arbitration_burst": 0, 00:04:53.150 "low_priority_weight": 0, 00:04:53.150 "medium_priority_weight": 0, 00:04:53.150 "high_priority_weight": 0, 00:04:53.150 "nvme_adminq_poll_period_us": 10000, 00:04:53.150 "nvme_ioq_poll_period_us": 0, 00:04:53.150 "io_queue_requests": 0, 00:04:53.150 "delay_cmd_submit": true, 00:04:53.150 "transport_retry_count": 4, 00:04:53.150 "bdev_retry_count": 3, 00:04:53.150 "transport_ack_timeout": 0, 00:04:53.150 "ctrlr_loss_timeout_sec": 0, 00:04:53.150 "reconnect_delay_sec": 0, 00:04:53.150 "fast_io_fail_timeout_sec": 0, 00:04:53.150 "disable_auto_failback": false, 00:04:53.150 "generate_uuids": false, 00:04:53.150 "transport_tos": 0, 00:04:53.150 "nvme_error_stat": false, 00:04:53.150 "rdma_srq_size": 0, 00:04:53.150 "io_path_stat": false, 00:04:53.150 "allow_accel_sequence": false, 00:04:53.150 "rdma_max_cq_size": 0, 00:04:53.150 "rdma_cm_event_timeout_ms": 0, 00:04:53.150 "dhchap_digests": [ 00:04:53.150 "sha256", 00:04:53.150 "sha384", 00:04:53.150 "sha512" 00:04:53.150 ], 00:04:53.150 "dhchap_dhgroups": [ 00:04:53.150 "null", 00:04:53.150 "ffdhe2048", 00:04:53.150 "ffdhe3072", 00:04:53.150 "ffdhe4096", 00:04:53.150 "ffdhe6144", 00:04:53.150 "ffdhe8192" 00:04:53.150 ] 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "bdev_nvme_set_hotplug", 00:04:53.150 "params": { 00:04:53.150 "period_us": 100000, 00:04:53.150 "enable": false 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "bdev_wait_for_examine" 00:04:53.150 } 00:04:53.150 ] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "scsi", 
00:04:53.150 "config": null 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "scheduler", 00:04:53.150 "config": [ 00:04:53.150 { 00:04:53.150 "method": "framework_set_scheduler", 00:04:53.150 "params": { 00:04:53.150 "name": "static" 00:04:53.150 } 00:04:53.150 } 00:04:53.150 ] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "vhost_scsi", 00:04:53.150 "config": [] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "vhost_blk", 00:04:53.150 "config": [] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "ublk", 00:04:53.150 "config": [] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "nbd", 00:04:53.150 "config": [] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "nvmf", 00:04:53.150 "config": [ 00:04:53.150 { 00:04:53.150 "method": "nvmf_set_config", 00:04:53.150 "params": { 00:04:53.150 "discovery_filter": "match_any", 00:04:53.150 "admin_cmd_passthru": { 00:04:53.150 "identify_ctrlr": false 00:04:53.150 }, 00:04:53.150 "dhchap_digests": [ 00:04:53.150 "sha256", 00:04:53.150 "sha384", 00:04:53.150 "sha512" 00:04:53.150 ], 00:04:53.150 "dhchap_dhgroups": [ 00:04:53.150 "null", 00:04:53.150 "ffdhe2048", 00:04:53.150 "ffdhe3072", 00:04:53.150 "ffdhe4096", 00:04:53.150 "ffdhe6144", 00:04:53.150 "ffdhe8192" 00:04:53.150 ] 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "nvmf_set_max_subsystems", 00:04:53.150 "params": { 00:04:53.150 "max_subsystems": 1024 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "nvmf_set_crdt", 00:04:53.150 "params": { 00:04:53.150 "crdt1": 0, 00:04:53.150 "crdt2": 0, 00:04:53.150 "crdt3": 0 00:04:53.150 } 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "method": "nvmf_create_transport", 00:04:53.150 "params": { 00:04:53.150 "trtype": "TCP", 00:04:53.150 "max_queue_depth": 128, 00:04:53.150 "max_io_qpairs_per_ctrlr": 127, 00:04:53.150 "in_capsule_data_size": 4096, 00:04:53.150 "max_io_size": 131072, 00:04:53.150 "io_unit_size": 131072, 00:04:53.150 
"max_aq_depth": 128, 00:04:53.150 "num_shared_buffers": 511, 00:04:53.150 "buf_cache_size": 4294967295, 00:04:53.150 "dif_insert_or_strip": false, 00:04:53.150 "zcopy": false, 00:04:53.150 "c2h_success": true, 00:04:53.150 "sock_priority": 0, 00:04:53.150 "abort_timeout_sec": 1, 00:04:53.150 "ack_timeout": 0, 00:04:53.150 "data_wr_pool_size": 0 00:04:53.150 } 00:04:53.150 } 00:04:53.150 ] 00:04:53.150 }, 00:04:53.150 { 00:04:53.150 "subsystem": "iscsi", 00:04:53.150 "config": [ 00:04:53.150 { 00:04:53.150 "method": "iscsi_set_options", 00:04:53.150 "params": { 00:04:53.150 "node_base": "iqn.2016-06.io.spdk", 00:04:53.150 "max_sessions": 128, 00:04:53.150 "max_connections_per_session": 2, 00:04:53.150 "max_queue_depth": 64, 00:04:53.150 "default_time2wait": 2, 00:04:53.150 "default_time2retain": 20, 00:04:53.150 "first_burst_length": 8192, 00:04:53.150 "immediate_data": true, 00:04:53.150 "allow_duplicated_isid": false, 00:04:53.150 "error_recovery_level": 0, 00:04:53.150 "nop_timeout": 60, 00:04:53.150 "nop_in_interval": 30, 00:04:53.150 "disable_chap": false, 00:04:53.150 "require_chap": false, 00:04:53.150 "mutual_chap": false, 00:04:53.150 "chap_group": 0, 00:04:53.150 "max_large_datain_per_connection": 64, 00:04:53.150 "max_r2t_per_connection": 4, 00:04:53.150 "pdu_pool_size": 36864, 00:04:53.150 "immediate_data_pool_size": 16384, 00:04:53.150 "data_out_pool_size": 2048 00:04:53.150 } 00:04:53.150 } 00:04:53.150 ] 00:04:53.150 } 00:04:53.150 ] 00:04:53.150 } 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2238113 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2238113 ']' 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2238113 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2238113 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2238113' 00:04:53.150 killing process with pid 2238113 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2238113 00:04:53.150 15:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2238113 00:04:53.408 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2238357 00:04:53.408 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.408 15:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2238357 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2238357 ']' 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2238357 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2238357 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2238357' 00:04:58.666 killing process with pid 2238357 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2238357 00:04:58.666 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2238357 00:04:58.926 15:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.926 15:39:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.926 00:04:58.926 real 0m6.829s 00:04:58.926 user 0m6.654s 00:04:58.926 sys 0m0.635s 00:04:58.926 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.926 15:39:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 ************************************ 00:04:58.926 END TEST skip_rpc_with_json 00:04:58.926 ************************************ 00:04:58.926 15:39:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.926 15:39:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.926 15:39:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.926 15:39:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.926 ************************************ 00:04:58.926 START TEST skip_rpc_with_delay 00:04:58.926 ************************************ 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.927 [2024-10-01 15:39:09.096877] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:58.927 [2024-10-01 15:39:09.096937] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.927 00:04:58.927 real 0m0.063s 00:04:58.927 user 0m0.032s 00:04:58.927 sys 0m0.030s 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.927 15:39:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.927 ************************************ 00:04:58.927 END TEST skip_rpc_with_delay 00:04:58.927 ************************************ 00:04:59.185 15:39:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.185 15:39:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.185 15:39:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.185 15:39:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.185 15:39:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.185 15:39:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 ************************************ 00:04:59.185 START TEST exit_on_failed_rpc_init 00:04:59.185 ************************************ 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2239587 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2239587 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2239587 ']' 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.185 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.186 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.186 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.186 15:39:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 [2024-10-01 15:39:09.230879] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:04:59.186 [2024-10-01 15:39:09.230923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239587 ] 00:04:59.186 [2024-10-01 15:39:09.300986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.445 [2024-10-01 15:39:09.380753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.011 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.011 [2024-10-01 15:39:10.120928] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:05:00.011 [2024-10-01 15:39:10.120978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239936 ] 00:05:00.011 [2024-10-01 15:39:10.188347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.269 [2024-10-01 15:39:10.261358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.269 [2024-10-01 15:39:10.261420] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:00.269 [2024-10-01 15:39:10.261429] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.269 [2024-10-01 15:39:10.261435] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2239587 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2239587 ']' 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2239587 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239587 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239587' 
00:05:00.269 killing process with pid 2239587 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2239587 00:05:00.269 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2239587 00:05:00.527 00:05:00.527 real 0m1.533s 00:05:00.527 user 0m1.766s 00:05:00.527 sys 0m0.444s 00:05:00.527 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.527 15:39:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.527 ************************************ 00:05:00.527 END TEST exit_on_failed_rpc_init 00:05:00.527 ************************************ 00:05:00.785 15:39:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.785 00:05:00.785 real 0m14.268s 00:05:00.785 user 0m13.778s 00:05:00.785 sys 0m1.690s 00:05:00.785 15:39:10 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.785 15:39:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.785 ************************************ 00:05:00.785 END TEST skip_rpc 00:05:00.785 ************************************ 00:05:00.785 15:39:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.785 15:39:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.785 15:39:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.785 15:39:10 -- common/autotest_common.sh@10 -- # set +x 00:05:00.785 ************************************ 00:05:00.785 START TEST rpc_client 00:05:00.785 ************************************ 00:05:00.785 15:39:10 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:00.785 * Looking for test storage... 
00:05:00.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:00.785 15:39:10 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.785 15:39:10 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.785 15:39:10 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.044 15:39:10 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.044 15:39:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.045 15:39:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:01.045 15:39:10 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.045 15:39:10 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:10 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:10 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:10 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:01.045 OK 00:05:01.045 15:39:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.045 00:05:01.045 real 0m0.197s 00:05:01.045 user 0m0.108s 00:05:01.045 sys 0m0.104s 00:05:01.045 15:39:11 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.045 15:39:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.045 ************************************ 00:05:01.045 END TEST rpc_client 00:05:01.045 ************************************ 00:05:01.045 15:39:11 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.045 15:39:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.045 15:39:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.045 15:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:01.045 ************************************ 00:05:01.045 START TEST json_config 00:05:01.045 ************************************ 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.045 15:39:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.045 15:39:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.045 15:39:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.045 15:39:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.045 15:39:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.045 15:39:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:01.045 15:39:11 json_config -- scripts/common.sh@345 -- # : 1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.045 15:39:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.045 15:39:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@353 -- # local d=1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.045 15:39:11 json_config -- scripts/common.sh@355 -- # echo 1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.045 15:39:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@353 -- # local d=2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.045 15:39:11 json_config -- scripts/common.sh@355 -- # echo 2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.045 15:39:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.045 15:39:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.045 15:39:11 json_config -- scripts/common.sh@368 -- # return 0 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:11 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.045 --rc geninfo_all_blocks=1 00:05:01.045 --rc geninfo_unexecuted_blocks=1 00:05:01.045 00:05:01.045 ' 00:05:01.045 15:39:11 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.045 --rc genhtml_branch_coverage=1 00:05:01.045 --rc genhtml_function_coverage=1 00:05:01.045 --rc genhtml_legend=1 00:05:01.046 --rc geninfo_all_blocks=1 00:05:01.046 --rc geninfo_unexecuted_blocks=1 00:05:01.046 00:05:01.046 ' 00:05:01.046 15:39:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.305 15:39:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.305 15:39:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.305 15:39:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.305 15:39:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.305 15:39:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 15:39:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 15:39:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 15:39:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.305 15:39:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@51 -- # : 0 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.305 15:39:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:01.305 INFO: JSON configuration test init 00:05:01.305 15:39:11 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:01.305 15:39:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.305 15:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.305 15:39:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:01.305 15:39:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 15:39:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:01.306 15:39:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.306 15:39:11 json_config -- json_config/common.sh@10 -- # shift 00:05:01.306 15:39:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.306 15:39:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.306 15:39:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.306 15:39:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.306 15:39:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.306 15:39:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2240299 00:05:01.306 15:39:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.306 Waiting for target to run... 
00:05:01.306 15:39:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2240299 /var/tmp/spdk_tgt.sock 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@831 -- # '[' -z 2240299 ']' 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.306 15:39:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.306 15:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 [2024-10-01 15:39:11.328357] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:01.306 [2024-10-01 15:39:11.328403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240299 ] 00:05:01.871 [2024-10-01 15:39:11.775390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.871 [2024-10-01 15:39:11.861193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.129 15:39:12 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.130 15:39:12 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:02.130 15:39:12 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.130 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:02.130 15:39:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.130 15:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:02.130 15:39:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.130 15:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:02.130 15:39:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:02.130 15:39:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:05.534 15:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.534 15:39:15 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.534 15:39:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:05.534 15:39:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.534 15:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.813 MallocForNvmf0 00:05:05.813 15:39:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:05.813 15:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.813 MallocForNvmf1 00:05:05.813 15:39:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.813 15:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.071 [2024-10-01 15:39:16.092922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.071 15:39:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.071 15:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.329 15:39:16 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.329 15:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.587 15:39:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.587 15:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.587 15:39:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.587 15:39:16 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.846 [2024-10-01 15:39:16.867339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.846 15:39:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.846 15:39:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.846 15:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.846 15:39:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.846 15:39:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.846 15:39:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.846 15:39:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:06.846 15:39:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.846 15:39:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.104 MallocBdevForConfigChangeCheck 00:05:07.104 15:39:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:07.104 15:39:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.104 15:39:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.104 15:39:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:07.104 15:39:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.361 15:39:17 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:07.361 INFO: shutting down applications... 00:05:07.361 15:39:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.361 15:39:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.361 15:39:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.361 15:39:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.886 Calling clear_iscsi_subsystem 00:05:09.886 Calling clear_nvmf_subsystem 00:05:09.886 Calling clear_nbd_subsystem 00:05:09.886 Calling clear_ublk_subsystem 00:05:09.886 Calling clear_vhost_blk_subsystem 00:05:09.886 Calling clear_vhost_scsi_subsystem 00:05:09.886 Calling clear_bdev_subsystem 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.886 15:39:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.886 15:39:20 json_config -- json_config/json_config.sh@352 -- # break 00:05:09.886 15:39:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:09.886 15:39:20 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:09.886 15:39:20 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.886 15:39:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.886 15:39:20 json_config -- json_config/common.sh@35 -- # [[ -n 2240299 ]] 00:05:09.886 15:39:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2240299 00:05:09.886 15:39:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.886 15:39:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.886 15:39:20 json_config -- json_config/common.sh@41 -- # kill -0 2240299 00:05:09.886 15:39:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.454 15:39:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.454 15:39:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.454 15:39:20 json_config -- json_config/common.sh@41 -- # kill -0 2240299 00:05:10.454 15:39:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.454 15:39:20 json_config -- json_config/common.sh@43 -- # break 00:05:10.454 15:39:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.454 15:39:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.454 SPDK target shutdown done 00:05:10.454 15:39:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:10.454 INFO: relaunching applications... 
00:05:10.454 15:39:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.455 15:39:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.455 15:39:20 json_config -- json_config/common.sh@10 -- # shift 00:05:10.455 15:39:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.455 15:39:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.455 15:39:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.455 15:39:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.455 15:39:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.455 15:39:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2242028 00:05:10.455 15:39:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.455 Waiting for target to run... 00:05:10.455 15:39:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.455 15:39:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2242028 /var/tmp/spdk_tgt.sock 00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 2242028 ']' 00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.455 15:39:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.455 [2024-10-01 15:39:20.620022] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:05:10.455 [2024-10-01 15:39:20.620085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242028 ] 00:05:11.022 [2024-10-01 15:39:21.073412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.022 [2024-10-01 15:39:21.165250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.304 [2024-10-01 15:39:24.194837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.304 [2024-10-01 15:39:24.227198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.871 15:39:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.871 15:39:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:14.871 15:39:24 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.871 00:05:14.871 15:39:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:14.871 15:39:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.871 INFO: Checking if target configuration is the same... 
00:05:14.871 15:39:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.871 15:39:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:14.871 15:39:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.871 + '[' 2 -ne 2 ']' 00:05:14.871 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.871 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.871 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.871 +++ basename /dev/fd/62 00:05:14.871 ++ mktemp /tmp/62.XXX 00:05:14.871 + tmp_file_1=/tmp/62.P1s 00:05:14.871 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.871 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.871 + tmp_file_2=/tmp/spdk_tgt_config.json.Vp5 00:05:14.871 + ret=0 00:05:14.871 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.128 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.128 + diff -u /tmp/62.P1s /tmp/spdk_tgt_config.json.Vp5 00:05:15.128 + echo 'INFO: JSON config files are the same' 00:05:15.128 INFO: JSON config files are the same 00:05:15.128 + rm /tmp/62.P1s /tmp/spdk_tgt_config.json.Vp5 00:05:15.128 + exit 0 00:05:15.128 15:39:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:15.128 15:39:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:15.128 INFO: changing configuration and checking if this can be detected... 
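The "configuration is the same" check above saves the live config to two temp files, passes both through `config_filter.py -method sort`, and runs `diff -u`. A generic equivalent can be sketched with `python3 -m json.tool --sort-keys` standing in for the sort filter — an assumption, since the real `config_filter.py` also strips volatile fields, which this sketch does not.

```shell
#!/usr/bin/env bash
# Compare two JSON files ignoring key order, roughly what the
# sort-then-diff steps in the trace above accomplish.
json_same() {
    diff -u \
        <(python3 -m json.tool --sort-keys "$1") \
        <(python3 -m json.tool --sort-keys "$2")
}

printf '{"b": 2, "a": 1}' > /tmp/cfg1.json
printf '{"a": 1, "b": 2}' > /tmp/cfg2.json
if json_same /tmp/cfg1.json /tmp/cfg2.json; then
    echo 'INFO: JSON config files are the same'
fi
```

Sorting keys before diffing is what makes the comparison order-insensitive; without it, two semantically identical configs serialized in different key order would produce a spurious `ret=1`.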
00:05:15.128 15:39:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.128 15:39:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.386 15:39:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.386 15:39:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:15.386 15:39:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.386 + '[' 2 -ne 2 ']' 00:05:15.386 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.386 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:15.386 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:15.386 +++ basename /dev/fd/62 00:05:15.386 ++ mktemp /tmp/62.XXX 00:05:15.386 + tmp_file_1=/tmp/62.rbh 00:05:15.386 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.386 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.386 + tmp_file_2=/tmp/spdk_tgt_config.json.mtl 00:05:15.386 + ret=0 00:05:15.386 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.644 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.645 + diff -u /tmp/62.rbh /tmp/spdk_tgt_config.json.mtl 00:05:15.645 + ret=1 00:05:15.645 + echo '=== Start of file: /tmp/62.rbh ===' 00:05:15.645 + cat /tmp/62.rbh 00:05:15.645 + echo '=== End of file: /tmp/62.rbh ===' 00:05:15.645 + echo '' 00:05:15.645 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mtl ===' 00:05:15.645 + cat /tmp/spdk_tgt_config.json.mtl 00:05:15.903 + echo '=== End of file: /tmp/spdk_tgt_config.json.mtl ===' 00:05:15.903 + echo '' 00:05:15.903 + rm /tmp/62.rbh /tmp/spdk_tgt_config.json.mtl 00:05:15.903 + exit 1 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:15.903 INFO: configuration change detected. 
00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 2242028 ]] 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.903 15:39:25 json_config -- json_config/json_config.sh@330 -- # killprocess 2242028 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@950 -- # '[' -z 2242028 ']' 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@954 -- # kill -0 
2242028 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@955 -- # uname 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242028 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242028' 00:05:15.903 killing process with pid 2242028 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@969 -- # kill 2242028 00:05:15.903 15:39:25 json_config -- common/autotest_common.sh@974 -- # wait 2242028 00:05:17.803 15:39:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.803 15:39:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:17.803 15:39:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.803 15:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.063 15:39:27 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.063 15:39:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.063 INFO: Success 00:05:18.063 00:05:18.063 real 0m16.908s 00:05:18.063 user 0m17.285s 00:05:18.063 sys 0m2.783s 00:05:18.063 15:39:27 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.063 15:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.063 ************************************ 00:05:18.063 END TEST json_config 00:05:18.063 ************************************ 00:05:18.063 15:39:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.063 15:39:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.063 15:39:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.063 15:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:18.063 ************************************ 00:05:18.063 START TEST json_config_extra_key 00:05:18.063 ************************************ 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.063 15:39:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.063 --rc genhtml_branch_coverage=1 00:05:18.063 --rc genhtml_function_coverage=1 00:05:18.063 --rc genhtml_legend=1 00:05:18.063 --rc geninfo_all_blocks=1 
00:05:18.063 --rc geninfo_unexecuted_blocks=1 00:05:18.063 00:05:18.063 ' 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.063 --rc genhtml_branch_coverage=1 00:05:18.063 --rc genhtml_function_coverage=1 00:05:18.063 --rc genhtml_legend=1 00:05:18.063 --rc geninfo_all_blocks=1 00:05:18.063 --rc geninfo_unexecuted_blocks=1 00:05:18.063 00:05:18.063 ' 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.063 --rc genhtml_branch_coverage=1 00:05:18.063 --rc genhtml_function_coverage=1 00:05:18.063 --rc genhtml_legend=1 00:05:18.063 --rc geninfo_all_blocks=1 00:05:18.063 --rc geninfo_unexecuted_blocks=1 00:05:18.063 00:05:18.063 ' 00:05:18.063 15:39:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.063 --rc genhtml_branch_coverage=1 00:05:18.063 --rc genhtml_function_coverage=1 00:05:18.063 --rc genhtml_legend=1 00:05:18.063 --rc geninfo_all_blocks=1 00:05:18.063 --rc geninfo_unexecuted_blocks=1 00:05:18.063 00:05:18.063 ' 00:05:18.063 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:18.063 15:39:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.064 15:39:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.064 15:39:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.064 15:39:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.064 15:39:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.064 15:39:28 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.064 15:39:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.064 15:39:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.064 15:39:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.064 15:39:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.064 15:39:28 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.064 15:39:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.064 INFO: launching applications... 00:05:18.064 15:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2243334 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.064 Waiting for target to run... 
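The "Waiting for target to run..." step above is `waitforlisten` polling until the target answers on its UNIX domain RPC socket. A minimal sketch of that polling pattern, assuming `python3` is available for the connect probe; the helper name and retry interval are illustrative, not the suite's actual implementation.

```shell
#!/usr/bin/env bash
# Poll until something is listening on a UNIX domain socket, as the
# waitforlisten step does for /var/tmp/spdk_tgt.sock in the trace above.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while ((retries-- > 0)); do
        if python3 - "$sock" <<'PY'
import socket, sys
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(1)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)
PY
        then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

Probing with an actual `connect()` rather than just checking that the socket file exists matters: the target creates the socket path before its RPC server is ready to accept requests.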
00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2243334 /var/tmp/spdk_tgt.sock 00:05:18.064 15:39:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2243334 ']' 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.064 15:39:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.323 [2024-10-01 15:39:28.289872] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:18.323 [2024-10-01 15:39:28.289918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243334 ] 00:05:18.582 [2024-10-01 15:39:28.575880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.582 [2024-10-01 15:39:28.644899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.147 15:39:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.147 15:39:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.147 00:05:19.147 15:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.147 INFO: shutting down applications... 00:05:19.147 15:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2243334 ]] 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2243334 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2243334 00:05:19.147 15:39:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.724 15:39:29 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2243334 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.724 15:39:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.724 SPDK target shutdown done 00:05:19.724 15:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.724 Success 00:05:19.724 00:05:19.724 real 0m1.549s 00:05:19.724 user 0m1.335s 00:05:19.724 sys 0m0.393s 00:05:19.724 15:39:29 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.724 15:39:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 END TEST json_config_extra_key 00:05:19.724 ************************************ 00:05:19.724 15:39:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.724 15:39:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.724 15:39:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.724 15:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:19.724 ************************************ 00:05:19.724 START TEST alias_rpc 00:05:19.724 ************************************ 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.724 * Looking for test storage... 
00:05:19.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.724 15:39:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.724 15:39:29 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.724 --rc genhtml_branch_coverage=1 00:05:19.724 --rc genhtml_function_coverage=1 00:05:19.724 --rc genhtml_legend=1 00:05:19.724 --rc geninfo_all_blocks=1 00:05:19.724 --rc geninfo_unexecuted_blocks=1 00:05:19.724 00:05:19.724 ' 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.725 --rc genhtml_branch_coverage=1 00:05:19.725 --rc genhtml_function_coverage=1 00:05:19.725 --rc genhtml_legend=1 00:05:19.725 --rc geninfo_all_blocks=1 00:05:19.725 --rc geninfo_unexecuted_blocks=1 00:05:19.725 00:05:19.725 ' 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:19.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.725 --rc genhtml_branch_coverage=1 00:05:19.725 --rc genhtml_function_coverage=1 00:05:19.725 --rc genhtml_legend=1 00:05:19.725 --rc geninfo_all_blocks=1 00:05:19.725 --rc geninfo_unexecuted_blocks=1 00:05:19.725 00:05:19.725 ' 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.725 --rc genhtml_branch_coverage=1 00:05:19.725 --rc genhtml_function_coverage=1 00:05:19.725 --rc genhtml_legend=1 00:05:19.725 --rc geninfo_all_blocks=1 00:05:19.725 --rc geninfo_unexecuted_blocks=1 00:05:19.725 00:05:19.725 ' 00:05:19.725 15:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.725 15:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2243617 00:05:19.725 15:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.725 15:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2243617 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2243617 ']' 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.725 15:39:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.725 [2024-10-01 15:39:29.906609] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:19.725 [2024-10-01 15:39:29.906659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243617 ] 00:05:19.984 [2024-10-01 15:39:29.965885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.984 [2024-10-01 15:39:30.048085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.550 15:39:30 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.550 15:39:30 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.550 15:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:20.809 15:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2243617 00:05:20.809 15:39:30 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2243617 ']' 00:05:20.809 15:39:30 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2243617 00:05:20.809 15:39:30 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.809 15:39:30 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.809 15:39:30 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2243617 00:05:21.067 15:39:31 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.067 15:39:31 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.067 15:39:31 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2243617' 00:05:21.067 killing process with pid 2243617 00:05:21.067 15:39:31 alias_rpc -- common/autotest_common.sh@969 -- # kill 2243617 00:05:21.067 15:39:31 alias_rpc -- common/autotest_common.sh@974 -- # wait 2243617 00:05:21.326 00:05:21.326 real 0m1.653s 00:05:21.326 user 0m1.794s 00:05:21.326 sys 0m0.458s 00:05:21.326 15:39:31 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.326 15:39:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.326 ************************************ 00:05:21.326 END TEST alias_rpc 00:05:21.326 ************************************ 00:05:21.326 15:39:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.326 15:39:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.326 15:39:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.326 15:39:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.326 15:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:21.326 ************************************ 00:05:21.326 START TEST spdkcli_tcp 00:05:21.326 ************************************ 00:05:21.326 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.326 * Looking for test storage... 
00:05:21.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:21.326 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.326 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.326 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.585 15:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.585 --rc genhtml_branch_coverage=1 00:05:21.585 --rc genhtml_function_coverage=1 00:05:21.585 --rc genhtml_legend=1 00:05:21.585 --rc geninfo_all_blocks=1 00:05:21.585 --rc geninfo_unexecuted_blocks=1 00:05:21.585 00:05:21.585 ' 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.585 --rc genhtml_branch_coverage=1 00:05:21.585 --rc genhtml_function_coverage=1 00:05:21.585 --rc genhtml_legend=1 00:05:21.585 --rc geninfo_all_blocks=1 00:05:21.585 --rc geninfo_unexecuted_blocks=1 00:05:21.585 00:05:21.585 ' 00:05:21.585 15:39:31 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.585 --rc genhtml_branch_coverage=1 00:05:21.585 --rc genhtml_function_coverage=1 00:05:21.585 --rc genhtml_legend=1 00:05:21.585 --rc geninfo_all_blocks=1 00:05:21.585 --rc geninfo_unexecuted_blocks=1 00:05:21.585 00:05:21.585 ' 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.585 --rc genhtml_branch_coverage=1 00:05:21.585 --rc genhtml_function_coverage=1 00:05:21.585 --rc genhtml_legend=1 00:05:21.585 --rc geninfo_all_blocks=1 00:05:21.585 --rc geninfo_unexecuted_blocks=1 00:05:21.585 00:05:21.585 ' 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.585 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.585 15:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.586 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2244116 00:05:21.586 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2244116 00:05:21.586 15:39:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2244116 ']' 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.586 15:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.586 [2024-10-01 15:39:31.637478] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:05:21.586 [2024-10-01 15:39:31.637531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244116 ] 00:05:21.586 [2024-10-01 15:39:31.705690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.844 [2024-10-01 15:39:31.787597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.844 [2024-10-01 15:39:31.787600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.411 15:39:32 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.411 15:39:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:22.411 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2244148 00:05:22.411 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.411 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.670 [ 00:05:22.670 "bdev_malloc_delete", 00:05:22.670 "bdev_malloc_create", 00:05:22.670 "bdev_null_resize", 00:05:22.670 "bdev_null_delete", 00:05:22.670 "bdev_null_create", 00:05:22.670 "bdev_nvme_cuse_unregister", 00:05:22.670 "bdev_nvme_cuse_register", 00:05:22.670 "bdev_opal_new_user", 00:05:22.670 "bdev_opal_set_lock_state", 00:05:22.670 "bdev_opal_delete", 00:05:22.670 "bdev_opal_get_info", 00:05:22.670 "bdev_opal_create", 00:05:22.670 "bdev_nvme_opal_revert", 00:05:22.670 "bdev_nvme_opal_init", 00:05:22.670 "bdev_nvme_send_cmd", 00:05:22.670 "bdev_nvme_set_keys", 00:05:22.670 "bdev_nvme_get_path_iostat", 00:05:22.670 "bdev_nvme_get_mdns_discovery_info", 00:05:22.670 "bdev_nvme_stop_mdns_discovery", 00:05:22.670 "bdev_nvme_start_mdns_discovery", 00:05:22.670 "bdev_nvme_set_multipath_policy", 00:05:22.670 "bdev_nvme_set_preferred_path", 00:05:22.670 "bdev_nvme_get_io_paths", 00:05:22.670 "bdev_nvme_remove_error_injection", 00:05:22.670 "bdev_nvme_add_error_injection", 00:05:22.670 "bdev_nvme_get_discovery_info", 00:05:22.670 "bdev_nvme_stop_discovery", 00:05:22.670 "bdev_nvme_start_discovery", 00:05:22.670 "bdev_nvme_get_controller_health_info", 00:05:22.670 "bdev_nvme_disable_controller", 00:05:22.670 "bdev_nvme_enable_controller", 00:05:22.670 "bdev_nvme_reset_controller", 00:05:22.670 "bdev_nvme_get_transport_statistics", 00:05:22.670 "bdev_nvme_apply_firmware", 00:05:22.670 "bdev_nvme_detach_controller", 00:05:22.670 "bdev_nvme_get_controllers", 00:05:22.670 "bdev_nvme_attach_controller", 00:05:22.670 "bdev_nvme_set_hotplug", 00:05:22.670 "bdev_nvme_set_options", 00:05:22.670 "bdev_passthru_delete", 00:05:22.670 "bdev_passthru_create", 00:05:22.670 "bdev_lvol_set_parent_bdev", 00:05:22.670 "bdev_lvol_set_parent", 00:05:22.670 "bdev_lvol_check_shallow_copy", 00:05:22.670 "bdev_lvol_start_shallow_copy", 00:05:22.670 
"bdev_lvol_grow_lvstore", 00:05:22.670 "bdev_lvol_get_lvols", 00:05:22.670 "bdev_lvol_get_lvstores", 00:05:22.670 "bdev_lvol_delete", 00:05:22.670 "bdev_lvol_set_read_only", 00:05:22.670 "bdev_lvol_resize", 00:05:22.670 "bdev_lvol_decouple_parent", 00:05:22.670 "bdev_lvol_inflate", 00:05:22.670 "bdev_lvol_rename", 00:05:22.670 "bdev_lvol_clone_bdev", 00:05:22.670 "bdev_lvol_clone", 00:05:22.670 "bdev_lvol_snapshot", 00:05:22.670 "bdev_lvol_create", 00:05:22.670 "bdev_lvol_delete_lvstore", 00:05:22.670 "bdev_lvol_rename_lvstore", 00:05:22.670 "bdev_lvol_create_lvstore", 00:05:22.670 "bdev_raid_set_options", 00:05:22.670 "bdev_raid_remove_base_bdev", 00:05:22.670 "bdev_raid_add_base_bdev", 00:05:22.670 "bdev_raid_delete", 00:05:22.670 "bdev_raid_create", 00:05:22.670 "bdev_raid_get_bdevs", 00:05:22.670 "bdev_error_inject_error", 00:05:22.670 "bdev_error_delete", 00:05:22.670 "bdev_error_create", 00:05:22.670 "bdev_split_delete", 00:05:22.670 "bdev_split_create", 00:05:22.670 "bdev_delay_delete", 00:05:22.670 "bdev_delay_create", 00:05:22.670 "bdev_delay_update_latency", 00:05:22.670 "bdev_zone_block_delete", 00:05:22.670 "bdev_zone_block_create", 00:05:22.670 "blobfs_create", 00:05:22.670 "blobfs_detect", 00:05:22.670 "blobfs_set_cache_size", 00:05:22.670 "bdev_aio_delete", 00:05:22.670 "bdev_aio_rescan", 00:05:22.670 "bdev_aio_create", 00:05:22.670 "bdev_ftl_set_property", 00:05:22.670 "bdev_ftl_get_properties", 00:05:22.670 "bdev_ftl_get_stats", 00:05:22.670 "bdev_ftl_unmap", 00:05:22.670 "bdev_ftl_unload", 00:05:22.670 "bdev_ftl_delete", 00:05:22.670 "bdev_ftl_load", 00:05:22.670 "bdev_ftl_create", 00:05:22.670 "bdev_virtio_attach_controller", 00:05:22.670 "bdev_virtio_scsi_get_devices", 00:05:22.670 "bdev_virtio_detach_controller", 00:05:22.670 "bdev_virtio_blk_set_hotplug", 00:05:22.670 "bdev_iscsi_delete", 00:05:22.670 "bdev_iscsi_create", 00:05:22.670 "bdev_iscsi_set_options", 00:05:22.670 "accel_error_inject_error", 00:05:22.670 "ioat_scan_accel_module", 
00:05:22.670 "dsa_scan_accel_module", 00:05:22.670 "iaa_scan_accel_module", 00:05:22.670 "vfu_virtio_create_fs_endpoint", 00:05:22.670 "vfu_virtio_create_scsi_endpoint", 00:05:22.670 "vfu_virtio_scsi_remove_target", 00:05:22.670 "vfu_virtio_scsi_add_target", 00:05:22.670 "vfu_virtio_create_blk_endpoint", 00:05:22.670 "vfu_virtio_delete_endpoint", 00:05:22.670 "keyring_file_remove_key", 00:05:22.670 "keyring_file_add_key", 00:05:22.670 "keyring_linux_set_options", 00:05:22.670 "fsdev_aio_delete", 00:05:22.670 "fsdev_aio_create", 00:05:22.670 "iscsi_get_histogram", 00:05:22.670 "iscsi_enable_histogram", 00:05:22.670 "iscsi_set_options", 00:05:22.670 "iscsi_get_auth_groups", 00:05:22.670 "iscsi_auth_group_remove_secret", 00:05:22.670 "iscsi_auth_group_add_secret", 00:05:22.671 "iscsi_delete_auth_group", 00:05:22.671 "iscsi_create_auth_group", 00:05:22.671 "iscsi_set_discovery_auth", 00:05:22.671 "iscsi_get_options", 00:05:22.671 "iscsi_target_node_request_logout", 00:05:22.671 "iscsi_target_node_set_redirect", 00:05:22.671 "iscsi_target_node_set_auth", 00:05:22.671 "iscsi_target_node_add_lun", 00:05:22.671 "iscsi_get_stats", 00:05:22.671 "iscsi_get_connections", 00:05:22.671 "iscsi_portal_group_set_auth", 00:05:22.671 "iscsi_start_portal_group", 00:05:22.671 "iscsi_delete_portal_group", 00:05:22.671 "iscsi_create_portal_group", 00:05:22.671 "iscsi_get_portal_groups", 00:05:22.671 "iscsi_delete_target_node", 00:05:22.671 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.671 "iscsi_target_node_add_pg_ig_maps", 00:05:22.671 "iscsi_create_target_node", 00:05:22.671 "iscsi_get_target_nodes", 00:05:22.671 "iscsi_delete_initiator_group", 00:05:22.671 "iscsi_initiator_group_remove_initiators", 00:05:22.671 "iscsi_initiator_group_add_initiators", 00:05:22.671 "iscsi_create_initiator_group", 00:05:22.671 "iscsi_get_initiator_groups", 00:05:22.671 "nvmf_set_crdt", 00:05:22.671 "nvmf_set_config", 00:05:22.671 "nvmf_set_max_subsystems", 00:05:22.671 "nvmf_stop_mdns_prr", 
00:05:22.671 "nvmf_publish_mdns_prr", 00:05:22.671 "nvmf_subsystem_get_listeners", 00:05:22.671 "nvmf_subsystem_get_qpairs", 00:05:22.671 "nvmf_subsystem_get_controllers", 00:05:22.671 "nvmf_get_stats", 00:05:22.671 "nvmf_get_transports", 00:05:22.671 "nvmf_create_transport", 00:05:22.671 "nvmf_get_targets", 00:05:22.671 "nvmf_delete_target", 00:05:22.671 "nvmf_create_target", 00:05:22.671 "nvmf_subsystem_allow_any_host", 00:05:22.671 "nvmf_subsystem_set_keys", 00:05:22.671 "nvmf_subsystem_remove_host", 00:05:22.671 "nvmf_subsystem_add_host", 00:05:22.671 "nvmf_ns_remove_host", 00:05:22.671 "nvmf_ns_add_host", 00:05:22.671 "nvmf_subsystem_remove_ns", 00:05:22.671 "nvmf_subsystem_set_ns_ana_group", 00:05:22.671 "nvmf_subsystem_add_ns", 00:05:22.671 "nvmf_subsystem_listener_set_ana_state", 00:05:22.671 "nvmf_discovery_get_referrals", 00:05:22.671 "nvmf_discovery_remove_referral", 00:05:22.671 "nvmf_discovery_add_referral", 00:05:22.671 "nvmf_subsystem_remove_listener", 00:05:22.671 "nvmf_subsystem_add_listener", 00:05:22.671 "nvmf_delete_subsystem", 00:05:22.671 "nvmf_create_subsystem", 00:05:22.671 "nvmf_get_subsystems", 00:05:22.671 "env_dpdk_get_mem_stats", 00:05:22.671 "nbd_get_disks", 00:05:22.671 "nbd_stop_disk", 00:05:22.671 "nbd_start_disk", 00:05:22.671 "ublk_recover_disk", 00:05:22.671 "ublk_get_disks", 00:05:22.671 "ublk_stop_disk", 00:05:22.671 "ublk_start_disk", 00:05:22.671 "ublk_destroy_target", 00:05:22.671 "ublk_create_target", 00:05:22.671 "virtio_blk_create_transport", 00:05:22.671 "virtio_blk_get_transports", 00:05:22.671 "vhost_controller_set_coalescing", 00:05:22.671 "vhost_get_controllers", 00:05:22.671 "vhost_delete_controller", 00:05:22.671 "vhost_create_blk_controller", 00:05:22.671 "vhost_scsi_controller_remove_target", 00:05:22.671 "vhost_scsi_controller_add_target", 00:05:22.671 "vhost_start_scsi_controller", 00:05:22.671 "vhost_create_scsi_controller", 00:05:22.671 "thread_set_cpumask", 00:05:22.671 "scheduler_set_options", 00:05:22.671 
"framework_get_governor", 00:05:22.671 "framework_get_scheduler", 00:05:22.671 "framework_set_scheduler", 00:05:22.671 "framework_get_reactors", 00:05:22.671 "thread_get_io_channels", 00:05:22.671 "thread_get_pollers", 00:05:22.671 "thread_get_stats", 00:05:22.671 "framework_monitor_context_switch", 00:05:22.671 "spdk_kill_instance", 00:05:22.671 "log_enable_timestamps", 00:05:22.671 "log_get_flags", 00:05:22.671 "log_clear_flag", 00:05:22.671 "log_set_flag", 00:05:22.671 "log_get_level", 00:05:22.671 "log_set_level", 00:05:22.671 "log_get_print_level", 00:05:22.671 "log_set_print_level", 00:05:22.671 "framework_enable_cpumask_locks", 00:05:22.671 "framework_disable_cpumask_locks", 00:05:22.671 "framework_wait_init", 00:05:22.671 "framework_start_init", 00:05:22.671 "scsi_get_devices", 00:05:22.671 "bdev_get_histogram", 00:05:22.671 "bdev_enable_histogram", 00:05:22.671 "bdev_set_qos_limit", 00:05:22.671 "bdev_set_qd_sampling_period", 00:05:22.671 "bdev_get_bdevs", 00:05:22.671 "bdev_reset_iostat", 00:05:22.671 "bdev_get_iostat", 00:05:22.671 "bdev_examine", 00:05:22.671 "bdev_wait_for_examine", 00:05:22.671 "bdev_set_options", 00:05:22.671 "accel_get_stats", 00:05:22.671 "accel_set_options", 00:05:22.671 "accel_set_driver", 00:05:22.671 "accel_crypto_key_destroy", 00:05:22.671 "accel_crypto_keys_get", 00:05:22.671 "accel_crypto_key_create", 00:05:22.671 "accel_assign_opc", 00:05:22.671 "accel_get_module_info", 00:05:22.671 "accel_get_opc_assignments", 00:05:22.671 "vmd_rescan", 00:05:22.671 "vmd_remove_device", 00:05:22.671 "vmd_enable", 00:05:22.671 "sock_get_default_impl", 00:05:22.671 "sock_set_default_impl", 00:05:22.671 "sock_impl_set_options", 00:05:22.671 "sock_impl_get_options", 00:05:22.671 "iobuf_get_stats", 00:05:22.671 "iobuf_set_options", 00:05:22.671 "keyring_get_keys", 00:05:22.671 "vfu_tgt_set_base_path", 00:05:22.671 "framework_get_pci_devices", 00:05:22.671 "framework_get_config", 00:05:22.671 "framework_get_subsystems", 00:05:22.671 
"fsdev_set_opts", 00:05:22.671 "fsdev_get_opts", 00:05:22.671 "trace_get_info", 00:05:22.671 "trace_get_tpoint_group_mask", 00:05:22.671 "trace_disable_tpoint_group", 00:05:22.671 "trace_enable_tpoint_group", 00:05:22.671 "trace_clear_tpoint_mask", 00:05:22.671 "trace_set_tpoint_mask", 00:05:22.671 "notify_get_notifications", 00:05:22.671 "notify_get_types", 00:05:22.671 "spdk_get_version", 00:05:22.671 "rpc_get_methods" 00:05:22.671 ] 00:05:22.671 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.671 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.671 15:39:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2244116 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2244116 ']' 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2244116 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244116 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244116' 00:05:22.671 killing process with pid 2244116 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2244116 00:05:22.671 15:39:32 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2244116 00:05:22.930 00:05:22.930 real 0m1.680s 00:05:22.930 user 0m3.066s 00:05:22.930 sys 0m0.486s 00:05:22.930 15:39:33 spdkcli_tcp -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:05:22.930 15:39:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.930 ************************************ 00:05:22.930 END TEST spdkcli_tcp 00:05:22.930 ************************************ 00:05:22.930 15:39:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.930 15:39:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.930 15:39:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.930 15:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.189 ************************************ 00:05:23.189 START TEST dpdk_mem_utility 00:05:23.189 ************************************ 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.189 * Looking for test storage... 00:05:23.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.189 15:39:33 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.189 15:39:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.189 15:39:33 dpdk_mem_utility 
-- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.189 --rc genhtml_branch_coverage=1 00:05:23.189 --rc genhtml_function_coverage=1 00:05:23.189 --rc genhtml_legend=1 00:05:23.189 --rc geninfo_all_blocks=1 00:05:23.189 --rc geninfo_unexecuted_blocks=1 00:05:23.189 00:05:23.189 ' 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.189 --rc genhtml_branch_coverage=1 00:05:23.189 --rc genhtml_function_coverage=1 00:05:23.189 --rc genhtml_legend=1 00:05:23.189 --rc geninfo_all_blocks=1 00:05:23.189 --rc geninfo_unexecuted_blocks=1 00:05:23.189 00:05:23.189 ' 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.189 --rc genhtml_branch_coverage=1 00:05:23.189 --rc genhtml_function_coverage=1 00:05:23.189 --rc genhtml_legend=1 00:05:23.189 --rc geninfo_all_blocks=1 00:05:23.189 --rc geninfo_unexecuted_blocks=1 00:05:23.189 00:05:23.189 ' 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.189 --rc genhtml_branch_coverage=1 00:05:23.189 --rc genhtml_function_coverage=1 00:05:23.189 --rc genhtml_legend=1 00:05:23.189 --rc geninfo_all_blocks=1 00:05:23.189 --rc geninfo_unexecuted_blocks=1 00:05:23.189 00:05:23.189 ' 00:05:23.189 15:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.189 15:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2244443 00:05:23.189 15:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2244443 00:05:23.189 15:39:33 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2244443 ']' 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.189 15:39:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.189 [2024-10-01 15:39:33.378273] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:23.189 [2024-10-01 15:39:33.378325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244443 ] 00:05:23.447 [2024-10-01 15:39:33.448215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.447 [2024-10-01 15:39:33.519453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.382 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.382 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:24.382 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.382 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.382 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.382 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.382 { 00:05:24.382 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.382 } 00:05:24.382 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.382 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.382 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:24.382 1 heaps totaling size 860.000000 MiB 00:05:24.382 size: 860.000000 MiB heap id: 0 00:05:24.382 end heaps---------- 00:05:24.382 9 mempools totaling size 642.649841 MiB 00:05:24.382 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.382 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.382 size: 92.545471 MiB name: bdev_io_2244443 00:05:24.382 size: 51.011292 MiB name: evtpool_2244443 00:05:24.382 size: 50.003479 MiB name: msgpool_2244443 00:05:24.382 
size: 36.509338 MiB name: fsdev_io_2244443 00:05:24.382 size: 21.763794 MiB name: PDU_Pool 00:05:24.382 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.382 size: 0.026123 MiB name: Session_Pool 00:05:24.382 end mempools------- 00:05:24.382 6 memzones totaling size 4.142822 MiB 00:05:24.382 size: 1.000366 MiB name: RG_ring_0_2244443 00:05:24.382 size: 1.000366 MiB name: RG_ring_1_2244443 00:05:24.382 size: 1.000366 MiB name: RG_ring_4_2244443 00:05:24.382 size: 1.000366 MiB name: RG_ring_5_2244443 00:05:24.382 size: 0.125366 MiB name: RG_ring_2_2244443 00:05:24.382 size: 0.015991 MiB name: RG_ring_3_2244443 00:05:24.382 end memzones------- 00:05:24.382 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.382 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:24.382 list of free elements. size: 13.984680 MiB 00:05:24.382 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:24.382 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:24.382 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:24.382 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:24.382 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:24.382 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:24.382 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:24.382 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:24.382 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:24.382 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:24.382 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:24.382 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:24.382 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:24.382 element at address: 0x200007000000 with size: 0.481934 MiB 
00:05:24.382 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:24.382 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:24.382 list of standard malloc elements. size: 199.218628 MiB 00:05:24.382 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:24.382 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:24.382 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:24.382 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:24.382 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:24.382 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.382 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:24.382 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.382 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:24.382 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200003e7ee00 with 
size: 0.000183 MiB 00:05:24.382 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:24.382 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:24.382 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:24.382 list of memzone associated elements. 
size: 646.796692 MiB 00:05:24.382 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:24.382 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.382 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:24.382 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.382 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:24.382 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2244443_0 00:05:24.382 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:24.382 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2244443_0 00:05:24.382 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:24.382 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2244443_0 00:05:24.382 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:24.382 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2244443_0 00:05:24.382 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:24.382 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.382 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:24.382 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.382 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:24.382 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2244443 00:05:24.382 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:24.382 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2244443 00:05:24.382 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.382 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2244443 00:05:24.382 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:24.382 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.382 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:24.382 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.382 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:24.382 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.382 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:24.382 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.382 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:24.382 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2244443 00:05:24.382 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:24.382 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2244443 00:05:24.382 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:24.382 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2244443 00:05:24.383 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:24.383 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2244443 00:05:24.383 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:24.383 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2244443 00:05:24.383 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:24.383 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2244443 00:05:24.383 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:24.383 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.383 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:24.383 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.383 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:24.383 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.383 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:24.383 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2244443 00:05:24.383 element at address: 0x2000096f5b80 with size: 
0.031738 MiB 00:05:24.383 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.383 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:24.383 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.383 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:24.383 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2244443 00:05:24.383 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:24.383 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.383 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:24.383 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2244443 00:05:24.383 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:24.383 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2244443 00:05:24.383 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:24.383 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2244443 00:05:24.383 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:24.383 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.383 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.383 15:39:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2244443 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2244443 ']' 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2244443 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244443 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.383 
15:39:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244443' 00:05:24.383 killing process with pid 2244443 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2244443 00:05:24.383 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2244443 00:05:24.642 00:05:24.642 real 0m1.555s 00:05:24.642 user 0m1.654s 00:05:24.642 sys 0m0.432s 00:05:24.642 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.642 15:39:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 ************************************ 00:05:24.642 END TEST dpdk_mem_utility 00:05:24.642 ************************************ 00:05:24.642 15:39:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:24.642 15:39:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.642 15:39:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.642 15:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 ************************************ 00:05:24.642 START TEST event 00:05:24.642 ************************************ 00:05:24.642 15:39:34 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:24.901 * Looking for test storage... 
00:05:24.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:24.901 15:39:34 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:24.901 15:39:34 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:24.901 15:39:34 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:24.901 15:39:34 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:24.901 15:39:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.901 15:39:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.901 15:39:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.901 15:39:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.901 15:39:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.901 15:39:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.901 15:39:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.901 15:39:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.901 15:39:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.901 15:39:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.901 15:39:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.901 15:39:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:24.901 15:39:34 event -- scripts/common.sh@345 -- # : 1 00:05:24.901 15:39:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.901 15:39:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.901 15:39:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:24.901 15:39:34 event -- scripts/common.sh@353 -- # local d=1 00:05:24.901 15:39:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.901 15:39:34 event -- scripts/common.sh@355 -- # echo 1 00:05:24.902 15:39:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.902 15:39:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:24.902 15:39:34 event -- scripts/common.sh@353 -- # local d=2 00:05:24.902 15:39:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.902 15:39:34 event -- scripts/common.sh@355 -- # echo 2 00:05:24.902 15:39:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.902 15:39:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.902 15:39:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.902 15:39:34 event -- scripts/common.sh@368 -- # return 0 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:24.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.902 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:24.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.902 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:24.902 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:24.902 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:24.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.902 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 15:39:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:24.902 15:39:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.902 15:39:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:24.902 15:39:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.902 15:39:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.902 ************************************ 00:05:24.902 START TEST event_perf 00:05:24.902 ************************************ 00:05:24.902 15:39:34 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.902 Running I/O for 1 seconds...[2024-10-01 15:39:35.010642] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:24.902 [2024-10-01 15:39:35.010701] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244745 ] 00:05:24.902 [2024-10-01 15:39:35.079935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.161 [2024-10-01 15:39:35.155793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.161 [2024-10-01 15:39:35.155903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.161 [2024-10-01 15:39:35.155991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.161 [2024-10-01 15:39:35.155993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.096 Running I/O for 1 seconds... 00:05:26.096 lcore 0: 210260 00:05:26.096 lcore 1: 210260 00:05:26.096 lcore 2: 210260 00:05:26.096 lcore 3: 210259 00:05:26.096 done. 
00:05:26.096 00:05:26.096 real 0m1.234s 00:05:26.096 user 0m4.141s 00:05:26.096 sys 0m0.089s 00:05:26.096 15:39:36 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.096 15:39:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.096 ************************************ 00:05:26.096 END TEST event_perf 00:05:26.096 ************************************ 00:05:26.096 15:39:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.096 15:39:36 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:26.096 15:39:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.096 15:39:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.355 ************************************ 00:05:26.355 START TEST event_reactor 00:05:26.355 ************************************ 00:05:26.355 15:39:36 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.355 [2024-10-01 15:39:36.320156] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:26.355 [2024-10-01 15:39:36.320227] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245001 ] 00:05:26.355 [2024-10-01 15:39:36.393775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.355 [2024-10-01 15:39:36.467468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.730 test_start 00:05:27.730 oneshot 00:05:27.730 tick 100 00:05:27.730 tick 100 00:05:27.730 tick 250 00:05:27.730 tick 100 00:05:27.730 tick 100 00:05:27.730 tick 250 00:05:27.730 tick 100 00:05:27.730 tick 500 00:05:27.730 tick 100 00:05:27.730 tick 100 00:05:27.730 tick 250 00:05:27.730 tick 100 00:05:27.730 tick 100 00:05:27.730 test_end 00:05:27.730 00:05:27.730 real 0m1.239s 00:05:27.730 user 0m1.146s 00:05:27.730 sys 0m0.088s 00:05:27.730 15:39:37 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.730 15:39:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:27.730 ************************************ 00:05:27.730 END TEST event_reactor 00:05:27.730 ************************************ 00:05:27.730 15:39:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:27.730 15:39:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:27.730 15:39:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.730 15:39:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.730 ************************************ 00:05:27.730 START TEST event_reactor_perf 00:05:27.730 ************************************ 00:05:27.730 15:39:37 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:27.730 [2024-10-01 15:39:37.632131] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:05:27.730 [2024-10-01 15:39:37.632193] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245249 ] 00:05:27.730 [2024-10-01 15:39:37.705095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.730 [2024-10-01 15:39:37.779225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.664 test_start 00:05:28.664 test_end 00:05:28.664 Performance: 506401 events per second 00:05:28.664 00:05:28.664 real 0m1.240s 00:05:28.664 user 0m1.151s 00:05:28.664 sys 0m0.084s 00:05:28.664 15:39:38 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.664 15:39:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.664 ************************************ 00:05:28.664 END TEST event_reactor_perf 00:05:28.664 ************************************ 00:05:28.923 15:39:38 event -- event/event.sh@49 -- # uname -s 00:05:28.923 15:39:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:28.923 15:39:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.923 15:39:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.923 15:39:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.923 15:39:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.923 ************************************ 00:05:28.923 START TEST event_scheduler 00:05:28.923 ************************************ 00:05:28.923 15:39:38 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.923 * Looking for test storage... 00:05:28.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.923 15:39:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.923 --rc genhtml_branch_coverage=1 00:05:28.923 --rc genhtml_function_coverage=1 00:05:28.923 --rc genhtml_legend=1 00:05:28.923 --rc geninfo_all_blocks=1 00:05:28.923 --rc geninfo_unexecuted_blocks=1 00:05:28.923 00:05:28.923 ' 00:05:28.923 15:39:39 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.924 --rc genhtml_branch_coverage=1 00:05:28.924 --rc genhtml_function_coverage=1 00:05:28.924 --rc 
genhtml_legend=1 00:05:28.924 --rc geninfo_all_blocks=1 00:05:28.924 --rc geninfo_unexecuted_blocks=1 00:05:28.924 00:05:28.924 ' 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.924 --rc genhtml_branch_coverage=1 00:05:28.924 --rc genhtml_function_coverage=1 00:05:28.924 --rc genhtml_legend=1 00:05:28.924 --rc geninfo_all_blocks=1 00:05:28.924 --rc geninfo_unexecuted_blocks=1 00:05:28.924 00:05:28.924 ' 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.924 --rc genhtml_branch_coverage=1 00:05:28.924 --rc genhtml_function_coverage=1 00:05:28.924 --rc genhtml_legend=1 00:05:28.924 --rc geninfo_all_blocks=1 00:05:28.924 --rc geninfo_unexecuted_blocks=1 00:05:28.924 00:05:28.924 ' 00:05:28.924 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:28.924 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2245532 00:05:28.924 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.924 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:28.924 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2245532 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2245532 ']' 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.924 15:39:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.181 [2024-10-01 15:39:39.148463] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:05:29.181 [2024-10-01 15:39:39.148511] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245532 ] 00:05:29.181 [2024-10-01 15:39:39.216570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.181 [2024-10-01 15:39:39.289090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.181 [2024-10-01 15:39:39.289179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.181 [2024-10-01 15:39:39.289285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.181 [2024-10-01 15:39:39.289285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.115 15:39:39 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.115 15:39:39 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:30.115 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.115 15:39:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 [2024-10-01 15:39:39.995738] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:30.115 [2024-10-01 15:39:39.995757] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.115 [2024-10-01 15:39:39.995766] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.115 [2024-10-01 15:39:39.995772] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.115 [2024-10-01 15:39:39.995777] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.115 15:39:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 [2024-10-01 15:39:40.071330] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 ************************************ 00:05:30.115 START TEST scheduler_create_thread 00:05:30.115 ************************************ 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 2 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 3 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 4 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 5 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 6 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 7 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 8 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 9 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 10 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.115 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.116 15:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.051 15:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.051 15:39:41 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.051 15:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.051 15:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.423 15:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.423 15:39:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.423 15:39:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.423 15:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.423 15:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.354 15:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.354 00:05:33.354 real 0m3.383s 00:05:33.354 user 0m0.023s 00:05:33.354 sys 0m0.006s 00:05:33.354 15:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.354 15:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.354 ************************************ 00:05:33.354 END TEST scheduler_create_thread 00:05:33.354 ************************************ 00:05:33.354 15:39:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.354 15:39:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2245532 00:05:33.354 15:39:43 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2245532 ']' 00:05:33.354 15:39:43 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 2245532 00:05:33.354 15:39:43 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:33.354 15:39:43 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.354 15:39:43 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2245532 00:05:33.611 15:39:43 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:33.611 15:39:43 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:33.611 15:39:43 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2245532' 00:05:33.611 killing process with pid 2245532 00:05:33.611 15:39:43 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2245532 00:05:33.611 15:39:43 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2245532 00:05:33.870 [2024-10-01 15:39:43.863342] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:34.127 00:05:34.127 real 0m5.172s 00:05:34.127 user 0m10.592s 00:05:34.127 sys 0m0.421s 00:05:34.127 15:39:44 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.127 15:39:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.127 ************************************ 00:05:34.127 END TEST event_scheduler 00:05:34.127 ************************************ 00:05:34.127 15:39:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.127 15:39:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.127 15:39:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.127 15:39:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.127 15:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.127 ************************************ 00:05:34.127 START TEST app_repeat 00:05:34.127 ************************************ 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2246501 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2246501' 00:05:34.127 Process app_repeat pid: 2246501 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.127 spdk_app_start Round 0 00:05:34.127 15:39:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2246501 /var/tmp/spdk-nbd.sock 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2246501 ']' 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.127 15:39:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.127 [2024-10-01 15:39:44.207572] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:34.127 [2024-10-01 15:39:44.207622] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246501 ] 00:05:34.127 [2024-10-01 15:39:44.275146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.384 [2024-10-01 15:39:44.353687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.384 [2024-10-01 15:39:44.353688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.948 15:39:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.948 15:39:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:34.948 15:39:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.205 Malloc0 00:05:35.205 15:39:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.462 Malloc1 00:05:35.462 15:39:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.462 
15:39:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.462 15:39:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.720 /dev/nbd0 00:05:35.720 15:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.720 15:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.720 15:39:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:35.720 15:39:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.720 15:39:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.720 15:39:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.720 15:39:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.721 1+0 records in 00:05:35.721 1+0 records out 00:05:35.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015235 s, 26.9 MB/s 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.721 15:39:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.721 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.721 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.721 15:39:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.979 /dev/nbd1 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.979 15:39:45 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.979 1+0 records in 00:05:35.979 1+0 records out 00:05:35.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0031396 s, 1.3 MB/s 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.979 15:39:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.979 15:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.236 15:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.236 { 00:05:36.237 "nbd_device": "/dev/nbd0", 00:05:36.237 "bdev_name": "Malloc0" 00:05:36.237 }, 00:05:36.237 { 00:05:36.237 "nbd_device": "/dev/nbd1", 00:05:36.237 "bdev_name": "Malloc1" 00:05:36.237 } 00:05:36.237 ]' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.237 { 00:05:36.237 "nbd_device": "/dev/nbd0", 00:05:36.237 "bdev_name": "Malloc0" 00:05:36.237 }, 
00:05:36.237 { 00:05:36.237 "nbd_device": "/dev/nbd1", 00:05:36.237 "bdev_name": "Malloc1" 00:05:36.237 } 00:05:36.237 ]' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.237 /dev/nbd1' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.237 /dev/nbd1' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.237 256+0 records in 00:05:36.237 256+0 records out 00:05:36.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101084 s, 104 MB/s 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.237 256+0 records in 00:05:36.237 256+0 records out 00:05:36.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142722 s, 73.5 MB/s 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.237 256+0 records in 00:05:36.237 256+0 records out 00:05:36.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148872 s, 70.4 MB/s 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.237 15:39:46 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.237 15:39:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.495 15:39:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.753 15:39:46 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.753 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.011 15:39:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.011 15:39:46 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.011 15:39:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.269 [2024-10-01 15:39:47.341419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.269 [2024-10-01 15:39:47.407496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.269 [2024-10-01 15:39:47.407498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.269 [2024-10-01 15:39:47.448032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.269 [2024-10-01 15:39:47.448070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.549 15:39:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.549 15:39:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.549 spdk_app_start Round 1 00:05:40.549 15:39:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2246501 /var/tmp/spdk-nbd.sock 00:05:40.549 15:39:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2246501 ']' 00:05:40.549 15:39:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.550 15:39:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:40.550 15:39:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.550 Malloc0 00:05:40.550 15:39:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.807 Malloc1 00:05:40.807 15:39:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.807 15:39:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.808 15:39:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.066 /dev/nbd0 00:05:41.066 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.066 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.066 1+0 records in 00:05:41.066 1+0 records out 00:05:41.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194909 s, 21.0 MB/s 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:41.066 15:39:51 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:41.066 15:39:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:41.066 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.066 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.066 15:39:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.325 /dev/nbd1 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.325 1+0 records in 00:05:41.325 1+0 records out 00:05:41.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195408 s, 21.0 MB/s 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:41.325 15:39:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.325 { 00:05:41.325 "nbd_device": "/dev/nbd0", 00:05:41.325 "bdev_name": "Malloc0" 00:05:41.325 }, 00:05:41.325 { 00:05:41.325 "nbd_device": "/dev/nbd1", 00:05:41.325 "bdev_name": "Malloc1" 00:05:41.325 } 00:05:41.325 ]' 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.325 { 00:05:41.325 "nbd_device": "/dev/nbd0", 00:05:41.325 "bdev_name": "Malloc0" 00:05:41.325 }, 00:05:41.325 { 00:05:41.325 "nbd_device": "/dev/nbd1", 00:05:41.325 "bdev_name": "Malloc1" 00:05:41.325 } 00:05:41.325 ]' 00:05:41.325 15:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.584 /dev/nbd1' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.584 15:39:51 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.584 /dev/nbd1' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.584 256+0 records in 00:05:41.584 256+0 records out 00:05:41.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100916 s, 104 MB/s 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.584 256+0 records in 00:05:41.584 256+0 records out 00:05:41.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133624 s, 78.5 MB/s 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.584 256+0 records in 00:05:41.584 256+0 records out 00:05:41.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144928 s, 72.4 MB/s 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.584 15:39:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.842 15:39:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.100 15:39:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.100 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.358 15:39:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.358 15:39:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.358 15:39:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.617 [2024-10-01 15:39:52.676180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.617 [2024-10-01 15:39:52.742405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.617 [2024-10-01 15:39:52.742405] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.617 [2024-10-01 15:39:52.783991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.617 [2024-10-01 15:39:52.784030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.905 15:39:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.905 15:39:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.905 spdk_app_start Round 2 00:05:45.905 15:39:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2246501 /var/tmp/spdk-nbd.sock 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2246501 ']' 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.905 15:39:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.905 15:39:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.905 Malloc0 00:05:45.905 15:39:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.164 Malloc1 00:05:46.164 15:39:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.164 15:39:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.423 /dev/nbd0 00:05:46.423 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.423 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.423 15:39:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.424 1+0 records in 00:05:46.424 1+0 records out 00:05:46.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002417 s, 16.9 MB/s 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.424 15:39:56 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.424 15:39:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.424 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.424 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.424 15:39:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.682 /dev/nbd1 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.682 1+0 records in 00:05:46.682 1+0 records out 00:05:46.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217751 s, 18.8 MB/s 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.682 15:39:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.682 15:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.941 { 00:05:46.941 "nbd_device": "/dev/nbd0", 00:05:46.941 "bdev_name": "Malloc0" 00:05:46.941 }, 00:05:46.941 { 00:05:46.941 "nbd_device": "/dev/nbd1", 00:05:46.941 "bdev_name": "Malloc1" 00:05:46.941 } 00:05:46.941 ]' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.941 { 00:05:46.941 "nbd_device": "/dev/nbd0", 00:05:46.941 "bdev_name": "Malloc0" 00:05:46.941 }, 00:05:46.941 { 00:05:46.941 "nbd_device": "/dev/nbd1", 00:05:46.941 "bdev_name": "Malloc1" 00:05:46.941 } 00:05:46.941 ]' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.941 /dev/nbd1' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.941 /dev/nbd1' 00:05:46.941 
15:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.941 15:39:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.941 256+0 records in 00:05:46.941 256+0 records out 00:05:46.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106752 s, 98.2 MB/s 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.941 256+0 records in 00:05:46.941 256+0 records out 00:05:46.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148263 s, 70.7 MB/s 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.941 256+0 records in 00:05:46.941 256+0 records out 00:05:46.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150235 s, 69.8 MB/s 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.941 15:39:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.200 15:39:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.459 15:39:57 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.459 15:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.718 15:39:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.718 15:39:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.977 15:39:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.977 [2024-10-01 15:39:58.115222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.236 [2024-10-01 15:39:58.183654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.236 [2024-10-01 15:39:58.183655] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.236 [2024-10-01 15:39:58.224364] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.236 [2024-10-01 15:39:58.224403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.876 15:40:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2246501 /var/tmp/spdk-nbd.sock 00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2246501 ']' 00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:50.876 15:40:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:51.141 15:40:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:51.141 15:40:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:51.141 15:40:01 event.app_repeat -- event/event.sh@39 -- # killprocess 2246501
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2246501 ']'
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2246501
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2246501
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2246501'
00:05:51.142 killing process with pid 2246501
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2246501
00:05:51.142 15:40:01 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2246501
00:05:51.400 spdk_app_start is called in Round 0.
00:05:51.400 Shutdown signal received, stop current app iteration
00:05:51.400 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization...
00:05:51.400 spdk_app_start is called in Round 1.
00:05:51.400 Shutdown signal received, stop current app iteration
00:05:51.400 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization...
00:05:51.400 spdk_app_start is called in Round 2.
00:05:51.400 Shutdown signal received, stop current app iteration
00:05:51.400 Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 reinitialization...
00:05:51.400 spdk_app_start is called in Round 3.
00:05:51.400 Shutdown signal received, stop current app iteration
00:05:51.400 15:40:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:51.400 15:40:01 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:51.400
00:05:51.400 real 0m17.186s
00:05:51.400 user 0m37.577s
00:05:51.400 sys 0m2.627s
00:05:51.400 15:40:01 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:51.400 15:40:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:51.400 ************************************
00:05:51.400 END TEST app_repeat
00:05:51.400 ************************************
00:05:51.400 15:40:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:51.400 15:40:01 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:51.400 15:40:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:51.400 15:40:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:51.400 15:40:01 event -- common/autotest_common.sh@10 -- # set +x
00:05:51.400 ************************************
00:05:51.400 START TEST cpu_locks
00:05:51.400 ************************************
00:05:51.400 15:40:01 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:51.400 * Looking for test storage...
00:05:51.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:51.400 15:40:01 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:51.400 15:40:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:51.400 15:40:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:51.400 15:40:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:51.400 15:40:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:51.659 15:40:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.659 --rc genhtml_branch_coverage=1
00:05:51.659 --rc genhtml_function_coverage=1
00:05:51.659 --rc genhtml_legend=1
00:05:51.659 --rc geninfo_all_blocks=1
00:05:51.659 --rc geninfo_unexecuted_blocks=1
00:05:51.659
00:05:51.659 '
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.659 --rc genhtml_branch_coverage=1
00:05:51.659 --rc genhtml_function_coverage=1
00:05:51.659 --rc genhtml_legend=1
00:05:51.659 --rc geninfo_all_blocks=1
00:05:51.659 --rc geninfo_unexecuted_blocks=1
00:05:51.659
00:05:51.659 '
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.659 --rc genhtml_branch_coverage=1
00:05:51.659 --rc genhtml_function_coverage=1
00:05:51.659 --rc genhtml_legend=1
00:05:51.659 --rc geninfo_all_blocks=1
00:05:51.659 --rc geninfo_unexecuted_blocks=1
00:05:51.659
00:05:51.659 '
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.659 --rc genhtml_branch_coverage=1
00:05:51.659 --rc genhtml_function_coverage=1
00:05:51.659 --rc genhtml_legend=1
00:05:51.659 --rc geninfo_all_blocks=1
00:05:51.659 --rc geninfo_unexecuted_blocks=1
00:05:51.659
00:05:51.659 '
00:05:51.659 15:40:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:51.659 15:40:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:51.659 15:40:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:51.659 15:40:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:51.659 15:40:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:51.659 ************************************
00:05:51.659 START TEST default_locks
00:05:51.659 ************************************
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2249600
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2249600
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2249600 ']'
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:51.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:51.659 15:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:51.659 [2024-10-01 15:40:01.690563] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:05:51.659 [2024-10-01 15:40:01.690603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249600 ]
00:05:51.659 [2024-10-01 15:40:01.759560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:51.659 [2024-10-01 15:40:01.839300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.593 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:52.593 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:52.593 15:40:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2249600
00:05:52.593 15:40:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2249600
00:05:52.593 15:40:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:52.852 lslocks: write error
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2249600
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2249600 ']'
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2249600
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2249600
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2249600'
00:05:52.852 killing process with pid 2249600
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2249600
00:05:52.852 15:40:02 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2249600
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2249600
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2249600
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2249600
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2249600 ']'
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2249600) - No such process
00:05:53.111 ERROR: process (pid: 2249600) is no longer running
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:53.111
00:05:53.111 real 0m1.643s
00:05:53.111 user 0m1.726s
00:05:53.111 sys 0m0.567s
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.111 15:40:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.111 ************************************
00:05:53.111 END TEST default_locks
00:05:53.111 ************************************
00:05:53.370 15:40:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:53.370 15:40:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:53.370 15:40:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.370 15:40:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.370 ************************************
00:05:53.370 START TEST default_locks_via_rpc
00:05:53.370 ************************************
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2249995
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2249995
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2249995 ']'
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:53.370 15:40:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:53.370 [2024-10-01 15:40:03.406990] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:05:53.370 [2024-10-01 15:40:03.407037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249995 ]
00:05:53.370 [2024-10-01 15:40:03.472562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.370 [2024-10-01 15:40:03.541625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:54.302 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:54.303 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2249995
00:05:54.303 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2249995
00:05:54.303 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2249995
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2249995 ']'
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2249995
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:54.560 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2249995
00:05:54.818 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:54.818 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:54.818 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2249995'
00:05:54.818 killing process with pid 2249995
00:05:54.818 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2249995
00:05:54.818 15:40:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2249995
00:05:55.078
00:05:55.078 real 0m1.762s
00:05:55.078 user 0m1.855s
00:05:55.078 sys 0m0.605s
00:05:55.078 15:40:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:55.078 15:40:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.078 ************************************
00:05:55.078 END TEST default_locks_via_rpc
00:05:55.078 ************************************
00:05:55.078 15:40:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:55.078 15:40:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:55.078 15:40:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:55.078 15:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:55.078 ************************************
00:05:55.078 START TEST non_locking_app_on_locked_coremask
00:05:55.078 ************************************
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2250267
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2250267 /var/tmp/spdk.sock
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2250267 ']'
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:55.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:55.078 15:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.078 [2024-10-01 15:40:05.237402] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:05:55.078 [2024-10-01 15:40:05.237443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250267 ]
00:05:55.337 [2024-10-01 15:40:05.305829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.337 [2024-10-01 15:40:05.384470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2250492
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2250492 /var/tmp/spdk2.sock
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2250492 ']'
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:55.903 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.161 [2024-10-01 15:40:06.109211] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:05:56.161 [2024-10-01 15:40:06.109259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250492 ]
00:05:56.161 [2024-10-01 15:40:06.184048] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:56.161 [2024-10-01 15:40:06.184073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.161 [2024-10-01 15:40:06.335792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.094 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.094 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:57.094 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2250267
00:05:57.094 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2250267
00:05:57.094 15:40:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.352 lslocks: write error
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2250267
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2250267 ']'
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2250267
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:57.352 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2250267
00:05:57.610 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:57.610 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:57.610 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2250267'
00:05:57.610 killing process with pid 2250267
00:05:57.610 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2250267
00:05:57.610 15:40:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2250267
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2250492
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2250492 ']'
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2250492
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2250492
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2250492'
00:05:58.178 killing process with pid 2250492
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2250492
00:05:58.178 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2250492
00:05:58.437
00:05:58.437 real 0m3.412s
00:05:58.437 user 0m3.677s
00:05:58.437 sys 0m1.018s
00:05:58.437 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:58.437 15:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:58.437 ************************************
00:05:58.437 END TEST non_locking_app_on_locked_coremask
00:05:58.437 ************************************
00:05:58.696 15:40:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:58.696 15:40:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:58.696 15:40:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:58.696 15:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.696 ************************************
00:05:58.696 START TEST locking_app_on_unlocked_coremask
00:05:58.696 ************************************
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2250924
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2250924 /var/tmp/spdk.sock
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2250924 ']'
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:58.696 15:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:58.696 [2024-10-01 15:40:08.717264] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:05:58.696 [2024-10-01 15:40:08.717306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250924 ]
00:05:58.696 [2024-10-01 15:40:08.786519] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:58.696 [2024-10-01 15:40:08.786544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.696 [2024-10-01 15:40:08.865622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2251001 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2251001 /var/tmp/spdk2.sock 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2251001 ']' 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.638 15:40:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.638 [2024-10-01 15:40:09.569958] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:05:59.638 [2024-10-01 15:40:09.570003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251001 ] 00:05:59.638 [2024-10-01 15:40:09.644090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.638 [2024-10-01 15:40:09.783261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.573 15:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.573 15:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.573 15:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2251001 00:06:00.573 15:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2251001 00:06:00.573 15:40:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.831 lslocks: write error 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2250924 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2250924 ']' 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2250924 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.831 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2250924 00:06:01.089 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.089 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.089 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2250924' 00:06:01.089 killing process with pid 2250924 00:06:01.089 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2250924 00:06:01.089 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2250924 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2251001 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2251001 ']' 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2251001 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251001 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251001' 00:06:01.656 killing process with pid 2251001 00:06:01.656 15:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2251001 00:06:01.656 15:40:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2251001 00:06:01.915 00:06:01.915 real 0m3.428s 00:06:01.915 user 0m3.685s 00:06:01.915 sys 0m1.021s 00:06:01.915 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.915 15:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.915 ************************************ 00:06:01.915 END TEST locking_app_on_unlocked_coremask 00:06:01.915 ************************************ 00:06:02.174 15:40:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.174 15:40:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.174 15:40:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.174 15:40:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.174 ************************************ 00:06:02.174 START TEST locking_app_on_locked_coremask 00:06:02.174 ************************************ 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2251498 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2251498 /var/tmp/spdk.sock 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2251498 ']' 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.174 15:40:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.174 [2024-10-01 15:40:12.216882] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:02.174 [2024-10-01 15:40:12.216928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251498 ] 00:06:02.174 [2024-10-01 15:40:12.286303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.174 [2024-10-01 15:40:12.354594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2251719 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2251719 /var/tmp/spdk2.sock 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2251719 /var/tmp/spdk2.sock 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2251719 /var/tmp/spdk2.sock 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2251719 ']' 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.107 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.107 [2024-10-01 15:40:13.105334] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:03.107 [2024-10-01 15:40:13.105384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251719 ] 00:06:03.107 [2024-10-01 15:40:13.181529] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2251498 has claimed it. 00:06:03.107 [2024-10-01 15:40:13.181563] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2251719) - No such process 00:06:03.673 ERROR: process (pid: 2251719) is no longer running 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2251498 00:06:03.673 15:40:13 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2251498 00:06:03.673 15:40:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.241 lslocks: write error 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2251498 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2251498 ']' 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2251498 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251498 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251498' 00:06:04.241 killing process with pid 2251498 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2251498 00:06:04.241 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2251498 00:06:04.500 00:06:04.500 real 0m2.463s 00:06:04.500 user 0m2.759s 00:06:04.500 sys 0m0.680s 00:06:04.500 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.500 15:40:14 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.500 ************************************ 00:06:04.500 END TEST locking_app_on_locked_coremask 00:06:04.500 ************************************ 00:06:04.500 15:40:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.500 15:40:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.500 15:40:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.500 15:40:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.758 ************************************ 00:06:04.758 START TEST locking_overlapped_coremask 00:06:04.758 ************************************ 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2251989 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2251989 /var/tmp/spdk.sock 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2251989 ']' 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.758 15:40:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.758 [2024-10-01 15:40:14.744229] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:04.758 [2024-10-01 15:40:14.744264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251989 ] 00:06:04.759 [2024-10-01 15:40:14.812138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.759 [2024-10-01 15:40:14.885184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.759 [2024-10-01 15:40:14.885291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.759 [2024-10-01 15:40:14.885291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.688 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.688 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.688 15:40:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2252157 00:06:05.688 15:40:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2252157 /var/tmp/spdk2.sock 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 2252157 /var/tmp/spdk2.sock 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2252157 /var/tmp/spdk2.sock 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2252157 ']' 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.689 15:40:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.689 [2024-10-01 15:40:15.635584] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:05.689 [2024-10-01 15:40:15.635632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252157 ] 00:06:05.689 [2024-10-01 15:40:15.712104] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2251989 has claimed it. 00:06:05.689 [2024-10-01 15:40:15.712134] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2252157) - No such process 00:06:06.254 ERROR: process (pid: 2252157) is no longer running 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2251989 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2251989 ']' 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2251989 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251989 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251989' 00:06:06.254 killing process with pid 2251989 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2251989 00:06:06.254 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2251989 00:06:06.515 00:06:06.515 real 0m1.976s 00:06:06.515 user 0m5.624s 00:06:06.515 sys 0m0.436s 00:06:06.515 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.515 15:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.515 
************************************ 00:06:06.515 END TEST locking_overlapped_coremask 00:06:06.515 ************************************ 00:06:06.515 15:40:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.515 15:40:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.515 15:40:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.515 15:40:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.773 ************************************ 00:06:06.773 START TEST locking_overlapped_coremask_via_rpc 00:06:06.773 ************************************ 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2252281 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2252281 /var/tmp/spdk.sock 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2252281 ']' 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:06.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.773 15:40:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.774 [2024-10-01 15:40:16.779612] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:06.774 [2024-10-01 15:40:16.779651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252281 ] 00:06:06.774 [2024-10-01 15:40:16.848159] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.774 [2024-10-01 15:40:16.848186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.774 [2024-10-01 15:40:16.928754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.774 [2024-10-01 15:40:16.928868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.774 [2024-10-01 15:40:16.928877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2252492 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2252492 /var/tmp/spdk2.sock 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2252492 ']' 00:06:07.706 15:40:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.706 15:40:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.706 [2024-10-01 15:40:17.663483] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:07.706 [2024-10-01 15:40:17.663534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252492 ] 00:06:07.706 [2024-10-01 15:40:17.737744] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.706 [2024-10-01 15:40:17.737770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.706 [2024-10-01 15:40:17.887964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.706 [2024-10-01 15:40:17.888080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.706 [2024-10-01 15:40:17.888080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.638 15:40:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.638 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.639 [2024-10-01 15:40:18.520937] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2252281 has claimed it. 00:06:08.639 request: 00:06:08.639 { 00:06:08.639 "method": "framework_enable_cpumask_locks", 00:06:08.639 "req_id": 1 00:06:08.639 } 00:06:08.639 Got JSON-RPC error response 00:06:08.639 response: 00:06:08.639 { 00:06:08.639 "code": -32603, 00:06:08.639 "message": "Failed to claim CPU core: 2" 00:06:08.639 } 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2252281 /var/tmp/spdk.sock 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2252281 ']' 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2252492 /var/tmp/spdk2.sock 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2252492 ']' 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.639 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.897 00:06:08.897 real 0m2.194s 00:06:08.897 user 0m0.955s 00:06:08.897 sys 0m0.173s 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.897 15:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.897 ************************************ 00:06:08.897 END TEST locking_overlapped_coremask_via_rpc 00:06:08.897 ************************************ 00:06:08.897 15:40:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.897 15:40:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2252281 ]] 00:06:08.897 15:40:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2252281 00:06:08.897 15:40:18 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2252281 ']' 00:06:08.897 15:40:18 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2252281 00:06:08.897 15:40:18 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:08.897 15:40:18 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.897 15:40:18 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2252281 00:06:08.897 15:40:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.897 15:40:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.897 15:40:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2252281' 00:06:08.897 killing process with pid 2252281 00:06:08.897 15:40:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2252281 00:06:08.897 15:40:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2252281 00:06:09.462 15:40:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2252492 ]] 00:06:09.462 15:40:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2252492 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2252492 ']' 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2252492 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2252492 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2252492' 00:06:09.462 killing process with pid 2252492 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2252492 00:06:09.462 15:40:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2252492 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2252281 ]] 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2252281 00:06:09.722 15:40:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2252281 ']' 00:06:09.722 15:40:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2252281 00:06:09.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2252281) - No such process 00:06:09.722 15:40:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2252281 is not found' 00:06:09.722 Process with pid 2252281 is not found 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2252492 ]] 00:06:09.722 15:40:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2252492 00:06:09.722 15:40:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2252492 ']' 00:06:09.722 15:40:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2252492 00:06:09.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2252492) - No such process 00:06:09.723 15:40:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2252492 is not found' 00:06:09.723 Process with pid 2252492 is not found 00:06:09.723 15:40:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.723 00:06:09.723 real 0m18.320s 00:06:09.723 user 0m31.435s 00:06:09.723 sys 0m5.464s 00:06:09.723 15:40:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.723 
15:40:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.723 ************************************ 00:06:09.723 END TEST cpu_locks 00:06:09.723 ************************************ 00:06:09.723 00:06:09.723 real 0m45.005s 00:06:09.723 user 1m26.328s 00:06:09.723 sys 0m9.144s 00:06:09.723 15:40:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.723 15:40:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.723 ************************************ 00:06:09.723 END TEST event 00:06:09.723 ************************************ 00:06:09.723 15:40:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.723 15:40:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.723 15:40:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.723 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:06:09.723 ************************************ 00:06:09.723 START TEST thread 00:06:09.723 ************************************ 00:06:09.723 15:40:19 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.982 * Looking for test storage... 
00:06:09.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.982 15:40:19 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.982 15:40:19 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.982 15:40:19 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.982 15:40:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.982 15:40:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.982 15:40:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.982 15:40:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.982 15:40:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.982 15:40:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.982 15:40:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.982 15:40:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.982 15:40:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.982 15:40:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.982 15:40:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.982 15:40:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:09.982 15:40:20 thread -- scripts/common.sh@345 -- # : 1 00:06:09.982 15:40:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.982 15:40:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.982 15:40:20 thread -- scripts/common.sh@365 -- # decimal 1 00:06:09.982 15:40:20 thread -- scripts/common.sh@353 -- # local d=1 00:06:09.982 15:40:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.982 15:40:20 thread -- scripts/common.sh@355 -- # echo 1 00:06:09.982 15:40:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.982 15:40:20 thread -- scripts/common.sh@366 -- # decimal 2 00:06:09.982 15:40:20 thread -- scripts/common.sh@353 -- # local d=2 00:06:09.982 15:40:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.982 15:40:20 thread -- scripts/common.sh@355 -- # echo 2 00:06:09.982 15:40:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.982 15:40:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.982 15:40:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.982 15:40:20 thread -- scripts/common.sh@368 -- # return 0 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.982 --rc genhtml_branch_coverage=1 00:06:09.982 --rc genhtml_function_coverage=1 00:06:09.982 --rc genhtml_legend=1 00:06:09.982 --rc geninfo_all_blocks=1 00:06:09.982 --rc geninfo_unexecuted_blocks=1 00:06:09.982 00:06:09.982 ' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.982 --rc genhtml_branch_coverage=1 00:06:09.982 --rc genhtml_function_coverage=1 00:06:09.982 --rc genhtml_legend=1 00:06:09.982 --rc geninfo_all_blocks=1 00:06:09.982 --rc geninfo_unexecuted_blocks=1 00:06:09.982 00:06:09.982 ' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.982 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.982 --rc genhtml_branch_coverage=1 00:06:09.982 --rc genhtml_function_coverage=1 00:06:09.982 --rc genhtml_legend=1 00:06:09.982 --rc geninfo_all_blocks=1 00:06:09.982 --rc geninfo_unexecuted_blocks=1 00:06:09.982 00:06:09.982 ' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.982 --rc genhtml_branch_coverage=1 00:06:09.982 --rc genhtml_function_coverage=1 00:06:09.982 --rc genhtml_legend=1 00:06:09.982 --rc geninfo_all_blocks=1 00:06:09.982 --rc geninfo_unexecuted_blocks=1 00:06:09.982 00:06:09.982 ' 00:06:09.982 15:40:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.982 15:40:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.982 ************************************ 00:06:09.982 START TEST thread_poller_perf 00:06:09.982 ************************************ 00:06:09.982 15:40:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.982 [2024-10-01 15:40:20.092324] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:09.982 [2024-10-01 15:40:20.092401] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253054 ] 00:06:09.982 [2024-10-01 15:40:20.164239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.242 [2024-10-01 15:40:20.238837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.242 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:11.176 ====================================== 00:06:11.176 busy:2107743162 (cyc) 00:06:11.176 total_run_count: 406000 00:06:11.176 tsc_hz: 2100000000 (cyc) 00:06:11.176 ====================================== 00:06:11.176 poller_cost: 5191 (cyc), 2471 (nsec) 00:06:11.176 00:06:11.176 real 0m1.243s 00:06:11.176 user 0m1.156s 00:06:11.176 sys 0m0.082s 00:06:11.176 15:40:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.176 15:40:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.176 ************************************ 00:06:11.176 END TEST thread_poller_perf 00:06:11.176 ************************************ 00:06:11.176 15:40:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.176 15:40:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:11.176 15:40:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.176 15:40:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.434 ************************************ 00:06:11.434 START TEST thread_poller_perf 00:06:11.434 ************************************ 00:06:11.434 15:40:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.434 [2024-10-01 15:40:21.407496] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:11.434 [2024-10-01 15:40:21.407566] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253311 ] 00:06:11.434 [2024-10-01 15:40:21.480187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.434 [2024-10-01 15:40:21.553740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.434 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.808 ====================================== 00:06:12.808 busy:2101533526 (cyc) 00:06:12.808 total_run_count: 5489000 00:06:12.808 tsc_hz: 2100000000 (cyc) 00:06:12.808 ====================================== 00:06:12.808 poller_cost: 382 (cyc), 181 (nsec) 00:06:12.808 00:06:12.808 real 0m1.238s 00:06:12.808 user 0m1.146s 00:06:12.808 sys 0m0.088s 00:06:12.808 15:40:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.808 15:40:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.808 ************************************ 00:06:12.808 END TEST thread_poller_perf 00:06:12.808 ************************************ 00:06:12.808 15:40:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.808 00:06:12.808 real 0m2.801s 00:06:12.808 user 0m2.462s 00:06:12.808 sys 0m0.352s 00:06:12.808 15:40:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.808 15:40:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.808 ************************************ 00:06:12.808 END TEST thread 00:06:12.808 ************************************ 00:06:12.808 15:40:22 -- spdk/autotest.sh@171 -- 
# [[ 0 -eq 1 ]] 00:06:12.808 15:40:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.808 15:40:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.808 15:40:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.808 15:40:22 -- common/autotest_common.sh@10 -- # set +x 00:06:12.808 ************************************ 00:06:12.808 START TEST app_cmdline 00:06:12.808 ************************************ 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.808 * Looking for test storage... 00:06:12.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.808 15:40:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.808 --rc genhtml_branch_coverage=1 00:06:12.808 --rc genhtml_function_coverage=1 00:06:12.808 --rc genhtml_legend=1 00:06:12.808 --rc geninfo_all_blocks=1 00:06:12.808 --rc geninfo_unexecuted_blocks=1 00:06:12.808 00:06:12.808 ' 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.808 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.808 --rc genhtml_branch_coverage=1 00:06:12.808 --rc genhtml_function_coverage=1 00:06:12.808 --rc genhtml_legend=1 00:06:12.808 --rc geninfo_all_blocks=1 00:06:12.808 --rc geninfo_unexecuted_blocks=1 00:06:12.808 00:06:12.808 ' 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.808 --rc genhtml_branch_coverage=1 00:06:12.808 --rc genhtml_function_coverage=1 00:06:12.808 --rc genhtml_legend=1 00:06:12.808 --rc geninfo_all_blocks=1 00:06:12.808 --rc geninfo_unexecuted_blocks=1 00:06:12.808 00:06:12.808 ' 00:06:12.808 15:40:22 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.808 --rc genhtml_branch_coverage=1 00:06:12.808 --rc genhtml_function_coverage=1 00:06:12.808 --rc genhtml_legend=1 00:06:12.808 --rc geninfo_all_blocks=1 00:06:12.808 --rc geninfo_unexecuted_blocks=1 00:06:12.808 00:06:12.808 ' 00:06:12.808 15:40:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.808 15:40:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2253607 00:06:12.808 15:40:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2253607 00:06:12.809 15:40:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2253607 ']' 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.809 15:40:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.809 [2024-10-01 15:40:22.959744] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:12.809 [2024-10-01 15:40:22.959792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253607 ] 00:06:13.067 [2024-10-01 15:40:23.028295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.068 [2024-10-01 15:40:23.107199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.633 15:40:23 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.633 15:40:23 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:13.633 15:40:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.892 { 00:06:13.892 "version": "SPDK v25.01-pre git sha1 3a41ae5b3", 00:06:13.892 "fields": { 00:06:13.892 "major": 25, 00:06:13.892 "minor": 1, 00:06:13.892 "patch": 0, 00:06:13.892 "suffix": "-pre", 00:06:13.892 "commit": "3a41ae5b3" 00:06:13.892 } 00:06:13.892 } 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.892 15:40:23 app_cmdline -- 
app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.892 15:40:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.892 15:40:23 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.151 request: 00:06:14.151 { 00:06:14.151 "method": "env_dpdk_get_mem_stats", 00:06:14.151 "req_id": 1 00:06:14.151 } 00:06:14.151 Got JSON-RPC error response 00:06:14.151 response: 00:06:14.151 { 00:06:14.151 "code": -32601, 00:06:14.151 "message": "Method not found" 00:06:14.151 } 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.151 15:40:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2253607 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2253607 ']' 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2253607 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2253607 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2253607' 00:06:14.151 killing process with pid 2253607 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@969 -- # kill 2253607 00:06:14.151 15:40:24 app_cmdline -- common/autotest_common.sh@974 -- # wait 2253607 00:06:14.409 00:06:14.409 real 0m1.831s 00:06:14.409 user 0m2.188s 00:06:14.409 sys 
0m0.458s 00:06:14.409 15:40:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.409 15:40:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.409 ************************************ 00:06:14.409 END TEST app_cmdline 00:06:14.409 ************************************ 00:06:14.409 15:40:24 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.409 15:40:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.409 15:40:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.409 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:14.669 ************************************ 00:06:14.669 START TEST version 00:06:14.669 ************************************ 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.669 * Looking for test storage... 00:06:14.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.669 15:40:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.669 15:40:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.669 15:40:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.669 15:40:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.669 15:40:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.669 15:40:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.669 15:40:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.669 15:40:24 version -- scripts/common.sh@338 -- # local 
'op=<' 00:06:14.669 15:40:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.669 15:40:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.669 15:40:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.669 15:40:24 version -- scripts/common.sh@344 -- # case "$op" in 00:06:14.669 15:40:24 version -- scripts/common.sh@345 -- # : 1 00:06:14.669 15:40:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.669 15:40:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.669 15:40:24 version -- scripts/common.sh@365 -- # decimal 1 00:06:14.669 15:40:24 version -- scripts/common.sh@353 -- # local d=1 00:06:14.669 15:40:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.669 15:40:24 version -- scripts/common.sh@355 -- # echo 1 00:06:14.669 15:40:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.669 15:40:24 version -- scripts/common.sh@366 -- # decimal 2 00:06:14.669 15:40:24 version -- scripts/common.sh@353 -- # local d=2 00:06:14.669 15:40:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.669 15:40:24 version -- scripts/common.sh@355 -- # echo 2 00:06:14.669 15:40:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.669 15:40:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.669 15:40:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.669 15:40:24 version -- scripts/common.sh@368 -- # return 0 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.669 --rc genhtml_branch_coverage=1 00:06:14.669 --rc genhtml_function_coverage=1 00:06:14.669 --rc genhtml_legend=1 00:06:14.669 --rc geninfo_all_blocks=1 00:06:14.669 --rc 
geninfo_unexecuted_blocks=1 00:06:14.669 00:06:14.669 ' 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.669 --rc genhtml_branch_coverage=1 00:06:14.669 --rc genhtml_function_coverage=1 00:06:14.669 --rc genhtml_legend=1 00:06:14.669 --rc geninfo_all_blocks=1 00:06:14.669 --rc geninfo_unexecuted_blocks=1 00:06:14.669 00:06:14.669 ' 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.669 --rc genhtml_branch_coverage=1 00:06:14.669 --rc genhtml_function_coverage=1 00:06:14.669 --rc genhtml_legend=1 00:06:14.669 --rc geninfo_all_blocks=1 00:06:14.669 --rc geninfo_unexecuted_blocks=1 00:06:14.669 00:06:14.669 ' 00:06:14.669 15:40:24 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.669 --rc genhtml_branch_coverage=1 00:06:14.669 --rc genhtml_function_coverage=1 00:06:14.669 --rc genhtml_legend=1 00:06:14.669 --rc geninfo_all_blocks=1 00:06:14.669 --rc geninfo_unexecuted_blocks=1 00:06:14.669 00:06:14.669 ' 00:06:14.669 15:40:24 version -- app/version.sh@17 -- # get_header_version major 00:06:14.669 15:40:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # cut -f2 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.669 15:40:24 version -- app/version.sh@17 -- # major=25 00:06:14.669 15:40:24 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.669 15:40:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.669 15:40:24 version -- app/version.sh@14 -- 
# cut -f2 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.669 15:40:24 version -- app/version.sh@18 -- # minor=1 00:06:14.669 15:40:24 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.669 15:40:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # cut -f2 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.669 15:40:24 version -- app/version.sh@19 -- # patch=0 00:06:14.669 15:40:24 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.669 15:40:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # cut -f2 00:06:14.669 15:40:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.669 15:40:24 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.669 15:40:24 version -- app/version.sh@22 -- # version=25.1 00:06:14.669 15:40:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.669 15:40:24 version -- app/version.sh@28 -- # version=25.1rc0 00:06:14.669 15:40:24 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:14.669 15:40:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.928 15:40:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:14.928 15:40:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:14.928 00:06:14.928 real 0m0.243s 00:06:14.928 user 0m0.160s 00:06:14.928 sys 
0m0.121s 00:06:14.928 15:40:24 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.928 15:40:24 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.928 ************************************ 00:06:14.928 END TEST version 00:06:14.928 ************************************ 00:06:14.928 15:40:24 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:14.928 15:40:24 -- spdk/autotest.sh@194 -- # uname -s 00:06:14.928 15:40:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:14.928 15:40:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.928 15:40:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.928 15:40:24 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:14.928 15:40:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.928 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:14.928 15:40:24 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:14.928 15:40:24 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:14.928 15:40:24 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.928 15:40:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.928 15:40:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.928 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:14.928 ************************************ 00:06:14.928 START TEST nvmf_tcp 00:06:14.928 ************************************ 00:06:14.928 15:40:24 nvmf_tcp -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.928 * Looking for test storage... 00:06:14.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.928 15:40:25 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.928 15:40:25 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.928 15:40:25 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.187 15:40:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.187 --rc genhtml_branch_coverage=1 00:06:15.187 --rc genhtml_function_coverage=1 00:06:15.187 --rc genhtml_legend=1 00:06:15.187 --rc geninfo_all_blocks=1 00:06:15.187 --rc geninfo_unexecuted_blocks=1 00:06:15.187 00:06:15.187 ' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.187 --rc genhtml_branch_coverage=1 00:06:15.187 --rc genhtml_function_coverage=1 00:06:15.187 --rc genhtml_legend=1 00:06:15.187 --rc geninfo_all_blocks=1 00:06:15.187 --rc geninfo_unexecuted_blocks=1 00:06:15.187 00:06:15.187 ' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.187 --rc genhtml_branch_coverage=1 00:06:15.187 --rc genhtml_function_coverage=1 00:06:15.187 --rc genhtml_legend=1 00:06:15.187 --rc geninfo_all_blocks=1 00:06:15.187 --rc geninfo_unexecuted_blocks=1 00:06:15.187 00:06:15.187 ' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.187 --rc genhtml_branch_coverage=1 00:06:15.187 --rc genhtml_function_coverage=1 00:06:15.187 --rc genhtml_legend=1 00:06:15.187 --rc geninfo_all_blocks=1 00:06:15.187 --rc geninfo_unexecuted_blocks=1 00:06:15.187 00:06:15.187 ' 00:06:15.187 15:40:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.187 15:40:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:15.187 15:40:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.187 15:40:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.187 ************************************ 00:06:15.187 START TEST nvmf_target_core 00:06:15.187 ************************************ 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:15.187 * Looking for test storage... 
00:06:15.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.187 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.188 --rc genhtml_branch_coverage=1 00:06:15.188 --rc genhtml_function_coverage=1 00:06:15.188 --rc genhtml_legend=1 00:06:15.188 --rc geninfo_all_blocks=1 00:06:15.188 --rc geninfo_unexecuted_blocks=1 00:06:15.188 00:06:15.188 ' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.188 --rc genhtml_branch_coverage=1 
00:06:15.188 --rc genhtml_function_coverage=1 00:06:15.188 --rc genhtml_legend=1 00:06:15.188 --rc geninfo_all_blocks=1 00:06:15.188 --rc geninfo_unexecuted_blocks=1 00:06:15.188 00:06:15.188 ' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.188 --rc genhtml_branch_coverage=1 00:06:15.188 --rc genhtml_function_coverage=1 00:06:15.188 --rc genhtml_legend=1 00:06:15.188 --rc geninfo_all_blocks=1 00:06:15.188 --rc geninfo_unexecuted_blocks=1 00:06:15.188 00:06:15.188 ' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.188 --rc genhtml_branch_coverage=1 00:06:15.188 --rc genhtml_function_coverage=1 00:06:15.188 --rc genhtml_legend=1 00:06:15.188 --rc geninfo_all_blocks=1 00:06:15.188 --rc geninfo_unexecuted_blocks=1 00:06:15.188 00:06:15.188 ' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.188 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.447 15:40:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.448 ************************************ 00:06:15.448 START TEST nvmf_abort 00:06:15.448 ************************************ 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.448 * Looking for test storage... 
00:06:15.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.448 
15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.448 --rc genhtml_branch_coverage=1 00:06:15.448 --rc genhtml_function_coverage=1 00:06:15.448 --rc genhtml_legend=1 00:06:15.448 --rc geninfo_all_blocks=1 00:06:15.448 --rc 
geninfo_unexecuted_blocks=1 00:06:15.448 00:06:15.448 ' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.448 --rc genhtml_branch_coverage=1 00:06:15.448 --rc genhtml_function_coverage=1 00:06:15.448 --rc genhtml_legend=1 00:06:15.448 --rc geninfo_all_blocks=1 00:06:15.448 --rc geninfo_unexecuted_blocks=1 00:06:15.448 00:06:15.448 ' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.448 --rc genhtml_branch_coverage=1 00:06:15.448 --rc genhtml_function_coverage=1 00:06:15.448 --rc genhtml_legend=1 00:06:15.448 --rc geninfo_all_blocks=1 00:06:15.448 --rc geninfo_unexecuted_blocks=1 00:06:15.448 00:06:15.448 ' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.448 --rc genhtml_branch_coverage=1 00:06:15.448 --rc genhtml_function_coverage=1 00:06:15.448 --rc genhtml_legend=1 00:06:15.448 --rc geninfo_all_blocks=1 00:06:15.448 --rc geninfo_unexecuted_blocks=1 00:06:15.448 00:06:15.448 ' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.448 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.448 15:40:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.449 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.708 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:15.708 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:06:15.708 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.708 15:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:22.332 15:40:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:22.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:22.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:22.332 15:40:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:22.332 Found net devices under 0000:86:00.0: cvl_0_0 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:22.332 Found net devices under 0000:86:00.1: cvl_0_1 00:06:22.332 
15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.332 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:06:22.333 00:06:22.333 --- 10.0.0.2 ping statistics --- 00:06:22.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.333 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:06:22.333 00:06:22.333 --- 10.0.0.1 ping statistics --- 00:06:22.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.333 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2257294 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2257294 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2257294 ']' 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.333 15:40:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.333 [2024-10-01 15:40:31.669584] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:06:22.333 [2024-10-01 15:40:31.669632] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.333 [2024-10-01 15:40:31.743897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.333 [2024-10-01 15:40:31.825684] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.333 [2024-10-01 15:40:31.825719] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.333 [2024-10-01 15:40:31.825726] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.333 [2024-10-01 15:40:31.825732] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.333 [2024-10-01 15:40:31.825737] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:22.333 [2024-10-01 15:40:31.825880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.333 [2024-10-01 15:40:31.825956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.333 [2024-10-01 15:40:31.825956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.333 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.333 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:22.333 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:22.333 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.333 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 [2024-10-01 15:40:32.559918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 Malloc0 00:06:22.591 15:40:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 Delay0 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 [2024-10-01 15:40:32.637671] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.591 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:22.591 [2024-10-01 15:40:32.755533] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:25.115 Initializing NVMe Controllers 00:06:25.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:25.115 controller IO queue size 128 less than required 00:06:25.115 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:25.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:25.115 Initialization complete. Launching workers. 
00:06:25.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37325 00:06:25.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37386, failed to submit 62 00:06:25.115 success 37329, unsuccessful 57, failed 0 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:25.115 rmmod nvme_tcp 00:06:25.115 rmmod nvme_fabrics 00:06:25.115 rmmod nvme_keyring 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:25.115 15:40:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2257294 ']' 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2257294 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2257294 ']' 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2257294 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257294 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2257294' 00:06:25.115 killing process with pid 2257294 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2257294 00:06:25.115 15:40:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2257294 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- 
# grep -v SPDK_NVMF 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.115 15:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.648 00:06:27.648 real 0m11.816s 00:06:27.648 user 0m13.529s 00:06:27.648 sys 0m5.406s 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.648 ************************************ 00:06:27.648 END TEST nvmf_abort 00:06:27.648 ************************************ 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.648 ************************************ 00:06:27.648 START TEST nvmf_ns_hotplug_stress 00:06:27.648 ************************************ 00:06:27.648 15:40:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:27.648 * Looking for test storage... 00:06:27.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.648 
15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.648 15:40:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:27.648 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.649 --rc genhtml_branch_coverage=1 00:06:27.649 --rc genhtml_function_coverage=1 00:06:27.649 --rc genhtml_legend=1 00:06:27.649 --rc geninfo_all_blocks=1 00:06:27.649 --rc geninfo_unexecuted_blocks=1 00:06:27.649 00:06:27.649 ' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.649 --rc genhtml_branch_coverage=1 00:06:27.649 --rc genhtml_function_coverage=1 00:06:27.649 --rc genhtml_legend=1 00:06:27.649 --rc geninfo_all_blocks=1 00:06:27.649 --rc geninfo_unexecuted_blocks=1 00:06:27.649 00:06:27.649 ' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.649 --rc genhtml_branch_coverage=1 00:06:27.649 --rc genhtml_function_coverage=1 00:06:27.649 --rc genhtml_legend=1 00:06:27.649 --rc geninfo_all_blocks=1 00:06:27.649 --rc geninfo_unexecuted_blocks=1 00:06:27.649 00:06:27.649 ' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.649 --rc genhtml_branch_coverage=1 00:06:27.649 --rc genhtml_function_coverage=1 00:06:27.649 --rc genhtml_legend=1 00:06:27.649 --rc geninfo_all_blocks=1 00:06:27.649 --rc geninfo_unexecuted_blocks=1 00:06:27.649 
00:06:27.649 ' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.649 15:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:34.212 15:40:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:06:34.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:34.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:34.212 15:40:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:34.212 Found net devices under 0000:86:00.0: cvl_0_0 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:34.212 15:40:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:34.212 Found net devices under 0000:86:00.1: cvl_0_1 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:34.212 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:34.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:06:34.213 00:06:34.213 --- 10.0.0.2 ping statistics --- 00:06:34.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.213 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:34.213 00:06:34.213 --- 10.0.0.1 ping statistics --- 00:06:34.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.213 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:34.213 15:40:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2261500 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2261500 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2261500 ']' 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.213 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.213 [2024-10-01 15:40:43.665822] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:06:34.213 [2024-10-01 15:40:43.665892] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.213 [2024-10-01 15:40:43.738194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.213 [2024-10-01 15:40:43.811431] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.213 [2024-10-01 15:40:43.811468] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.213 [2024-10-01 15:40:43.811475] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.213 [2024-10-01 15:40:43.811484] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.213 [2024-10-01 15:40:43.811489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:34.213 [2024-10-01 15:40:43.811606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.213 [2024-10-01 15:40:43.811632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.213 [2024-10-01 15:40:43.811633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:34.471 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:34.729 [2024-10-01 15:40:44.686597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.729 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.988 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.988 [2024-10-01 15:40:45.116345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.988 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.246 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:35.504 Malloc0 00:06:35.504 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:35.764 Delay0 00:06:35.764 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.764 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:36.022 NULL1 00:06:36.022 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:36.294 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2261972 00:06:36.294 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:36.294 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:36.294 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.552 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.552 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:36.552 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:36.816 true 00:06:36.816 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:36.816 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.075 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.331 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:37.331 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:37.589 true 00:06:37.589 15:40:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:37.589 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.589 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.846 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:37.846 15:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:38.103 true 00:06:38.103 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:38.103 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.361 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.619 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:38.619 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:38.619 true 00:06:38.877 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:38.877 15:40:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.877 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.135 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:39.135 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:39.393 true 00:06:39.393 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:39.393 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.650 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.908 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:39.909 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:39.909 true 00:06:39.909 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:39.909 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.167 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.426 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:40.426 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:40.684 true 00:06:40.684 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:40.684 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.942 15:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.200 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:41.200 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:41.200 true 00:06:41.200 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:41.200 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.457 
15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.715 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:41.715 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:41.972 true 00:06:41.972 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:41.972 15:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.230 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.230 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:42.230 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:42.487 true 00:06:42.487 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:42.488 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.746 15:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.004 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:43.004 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:43.261 true 00:06:43.261 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:43.261 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.261 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.518 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:43.518 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:43.778 true 00:06:43.778 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:43.778 15:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.039 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.297 
15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:44.297 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:44.297 true 00:06:44.297 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:44.297 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.560 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.818 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:44.818 15:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:45.076 true 00:06:45.076 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:45.076 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.333 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.333 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:45.333 15:40:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:45.592 true 00:06:45.592 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:45.592 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.850 15:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.108 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:46.108 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:46.365 true 00:06:46.365 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:46.365 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.623 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.623 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:46.623 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:46.880 true 00:06:46.880 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:46.880 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.138 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.396 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:47.396 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:47.654 true 00:06:47.655 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:47.655 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.912 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.912 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:47.912 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:48.170 true 00:06:48.170 15:40:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:48.170 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.427 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.685 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:48.685 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:48.944 true 00:06:48.944 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:48.944 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.201 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.201 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:49.201 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:49.459 true 00:06:49.459 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:49.459 15:40:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.717 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.975 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:49.975 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:50.233 true 00:06:50.233 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:50.233 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.492 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.492 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:50.492 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:50.750 true 00:06:50.750 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:50.750 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.007 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.265 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:51.265 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:51.523 true 00:06:51.523 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:51.523 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.781 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.781 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:51.781 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:52.039 true 00:06:52.039 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:52.039 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.296 
15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.554 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:52.554 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:52.813 true 00:06:52.813 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:52.813 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.813 15:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.070 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:53.070 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:53.329 true 00:06:53.329 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:53.329 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.588 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.848 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:53.848 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:53.848 true 00:06:54.106 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:54.106 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.106 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.379 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:54.379 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:54.685 true 00:06:54.685 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:54.685 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.982 15:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.982 
15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:54.982 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:55.241 true 00:06:55.242 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:55.242 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.501 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.759 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:55.759 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:56.018 true 00:06:56.018 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:56.018 15:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.276 15:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.276 15:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:56.276 15:41:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:56.535 true 00:06:56.535 15:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:56.535 15:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.795 15:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.053 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:57.053 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:57.311 true 00:06:57.311 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:57.311 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.570 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.570 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:57.570 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:57.830 true 00:06:57.830 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:57.830 15:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.088 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.346 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:58.346 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:58.605 true 00:06:58.605 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:58.605 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.863 15:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.863 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:58.863 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:59.120 true 00:06:59.120 15:41:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:59.120 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.377 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.635 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:59.635 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:59.894 true 00:06:59.894 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:06:59.894 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.152 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.152 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:00.152 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:00.410 true 00:07:00.410 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:00.410 15:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.668 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.926 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:00.926 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:01.206 true 00:07:01.206 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:01.206 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.206 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.464 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:01.464 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:01.722 true 00:07:01.722 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:01.722 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.979 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.236 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:02.236 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:02.494 true 00:07:02.494 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:02.494 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.751 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.751 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:02.751 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:03.009 true 00:07:03.009 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:03.009 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.267 
15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.525 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:03.525 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:03.783 true 00:07:03.783 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:03.783 15:41:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.041 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.041 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:04.041 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:04.300 true 00:07:04.300 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:04.300 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.559 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.818 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:04.818 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:05.076 true 00:07:05.076 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:05.076 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.335 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.335 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:05.335 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:05.593 true 00:07:05.593 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972 00:07:05.593 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.851 15:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.109 
15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:06.109 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:06.368 true
00:07:06.368 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972
00:07:06.368 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.627 Initializing NVMe Controllers
00:07:06.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:06.627 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:07:06.627 Controller IO queue size 128, less than required.
00:07:06.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:06.627 WARNING: Some requested NVMe devices were skipped
00:07:06.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:06.627 Initialization complete. Launching workers.
00:07:06.627 ========================================================
00:07:06.627 Latency(us)
00:07:06.627 Device Information : IOPS MiB/s Average min max
00:07:06.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27558.51 13.46 4644.57 2148.83 8634.64
00:07:06.627 ========================================================
00:07:06.627 Total : 27558.51 13.46 4644.57 2148.83 8634.64
00:07:06.627
00:07:06.627 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.627 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:06.627 15:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:06.885 true
00:07:06.885 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2261972
00:07:06.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2261972) - No such process
00:07:06.885 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2261972
00:07:06.885 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.144 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:07.403 15:41:17
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:07.403 null0 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.403 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:07.661 null1 00:07:07.661 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.661 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.661 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:07.920 null2 00:07:07.920 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.920 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.920 15:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:08.178 null3 
00:07:08.178 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.178 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.178 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:08.178 null4 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:08.437 null5 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.437 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:08.696 null6 00:07:08.696 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.696 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.696 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:08.956 null7 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.956 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2267502 2267504 2267506 2267508 2267510 2267512 2267514 2267517
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.957 15:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.215 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.474 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.734 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.992 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.992 15:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.992 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.251 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.511 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.770 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.770 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.770 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.770 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.771 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.771 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.771 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.771 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.030 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.289 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.548 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.807 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.065 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.324 15:41:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.324 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.325 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.584 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:12.843 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.844 15:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.103 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.104 rmmod nvme_tcp 00:07:13.104 rmmod nvme_fabrics 00:07:13.104 rmmod nvme_keyring 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2261500 ']' 00:07:13.104 15:41:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2261500 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2261500 ']' 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2261500 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261500 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261500' 00:07:13.104 killing process with pid 2261500 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2261500 00:07:13.104 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2261500 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # 
iptables-save 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:13.362 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:07:13.363 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.363 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.363 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.363 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.363 15:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.910 00:07:15.910 real 0m48.217s 00:07:15.910 user 3m23.399s 00:07:15.910 sys 0m17.499s 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.910 ************************************ 00:07:15.910 END TEST nvmf_ns_hotplug_stress 00:07:15.910 ************************************ 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.910 ************************************ 00:07:15.910 START TEST nvmf_delete_subsystem 00:07:15.910 ************************************ 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:15.910 * Looking for test storage... 00:07:15.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.910 15:41:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:15.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.910 --rc genhtml_branch_coverage=1 00:07:15.910 --rc genhtml_function_coverage=1 00:07:15.910 --rc genhtml_legend=1 00:07:15.910 --rc geninfo_all_blocks=1 00:07:15.910 --rc geninfo_unexecuted_blocks=1 00:07:15.910 00:07:15.910 ' 00:07:15.910 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:15.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.910 --rc genhtml_branch_coverage=1 00:07:15.910 --rc genhtml_function_coverage=1 00:07:15.910 --rc genhtml_legend=1 00:07:15.910 --rc geninfo_all_blocks=1 00:07:15.910 --rc geninfo_unexecuted_blocks=1 00:07:15.910 00:07:15.910 ' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.911 --rc genhtml_branch_coverage=1 00:07:15.911 --rc genhtml_function_coverage=1 00:07:15.911 --rc genhtml_legend=1 00:07:15.911 --rc geninfo_all_blocks=1 00:07:15.911 --rc geninfo_unexecuted_blocks=1 00:07:15.911 00:07:15.911 ' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.911 --rc 
genhtml_branch_coverage=1 00:07:15.911 --rc genhtml_function_coverage=1 00:07:15.911 --rc genhtml_legend=1 00:07:15.911 --rc geninfo_all_blocks=1 00:07:15.911 --rc geninfo_unexecuted_blocks=1 00:07:15.911 00:07:15.911 ' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.911 15:41:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.911 15:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:22.480 15:41:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:22.480 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:22.480 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:22.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:22.481 Found net devices under 0000:86:00.0: cvl_0_0 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:22.481 Found net devices under 0000:86:00.1: cvl_0_1 00:07:22.481 15:41:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:22.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:07:22.481 00:07:22.481 --- 10.0.0.2 ping statistics --- 00:07:22.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.481 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:07:22.481 00:07:22.481 --- 10.0.0.1 ping statistics --- 00:07:22.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.481 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:22.481 15:41:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2272094 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2272094 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2272094 ']' 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.481 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.482 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.482 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.482 [2024-10-01 15:41:31.897487] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:22.482 [2024-10-01 15:41:31.897538] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.482 [2024-10-01 15:41:31.970370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.482 [2024-10-01 15:41:32.049267] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.482 [2024-10-01 15:41:32.049300] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.482 [2024-10-01 15:41:32.049307] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.482 [2024-10-01 15:41:32.049313] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.482 [2024-10-01 15:41:32.049318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.482 [2024-10-01 15:41:32.049386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.482 [2024-10-01 15:41:32.049386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.740 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.740 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:22.740 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:22.740 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.740 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 [2024-10-01 15:41:32.775090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 [2024-10-01 15:41:32.795256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 NULL1 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 Delay0 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2272182 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:22.741 15:41:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:22.741 [2024-10-01 15:41:32.897033] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:25.273 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.273 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.273 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Write completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 starting I/O failed: -6 00:07:25.273 Read completed with error (sct=0, sc=8) 00:07:25.273 Read completed with error 
(sct=0, sc=8)
00:07:25.273 Write completed with error (sct=0, sc=8)
00:07:25.273 Read completed with error (sct=0, sc=8)
00:07:25.273 starting I/O failed: -6
00:07:25.273 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries trimmed ...]
00:07:25.273 [2024-10-01 15:41:35.114595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe1570 is same with the state(6) to be set
00:07:25.273 [2024-10-01 15:41:35.116067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f764800d470 is same with the state(6) to be set
00:07:26.209 [2024-10-01 15:41:36.073674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe2a70 is same with the state(6) to be set
00:07:26.209 [2024-10-01 15:41:36.118497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe1390 is same with the state(6) to be set
00:07:26.209 [2024-10-01 15:41:36.118622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe1750 is same with the state(6) to be set
00:07:26.210 [2024-10-01 15:41:36.118867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f764800cfe0 is same with the state(6) to be set
00:07:26.210 [2024-10-01 15:41:36.119403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f764800d7a0 is same with the state(6) to be set
00:07:26.210 Initializing NVMe Controllers
00:07:26.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:26.210 Controller IO queue size 128, less than required.
00:07:26.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:26.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:26.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:26.210 Initialization complete. Launching workers.
00:07:26.210 ========================================================
00:07:26.210 Latency(us)
00:07:26.210 Device Information : IOPS MiB/s Average min max
00:07:26.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.76 0.08 900464.15 326.77 1044255.87
00:07:26.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.80 0.08 911283.57 224.67 1014235.03
00:07:26.210 ========================================================
00:07:26.210 Total : 331.55 0.16 905776.68 224.67 1044255.87
00:07:26.210
00:07:26.210 [2024-10-01 15:41:36.119923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe2a70 (9): Bad file descriptor
00:07:26.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:26.210 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.210 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:26.210 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2272182
00:07:26.210 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2272182
00:07:26.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2272182) - No such process
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2272182
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2272182
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2272182
00:07:26.468 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.469 [2024-10-01 15:41:36.647522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.469 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2272823
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:26.730 15:41:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:26.730 [2024-10-01 15:41:36.727765] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:27.011 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:27.011 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:27.011 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:27.619 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:27.619 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:27.619 15:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:28.186 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:28.187 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:28.187 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:28.752 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:28.752 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:28.752 15:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:29.011 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:29.011 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:29.011 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:29.577 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:29.577 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:29.578 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:29.835 Initializing NVMe Controllers
00:07:29.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:29.835 Controller IO queue size 128, less than required.
00:07:29.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:29.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:29.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:29.835 Initialization complete. Launching workers.
00:07:29.835 ========================================================
00:07:29.835 Latency(us)
00:07:29.835 Device Information : IOPS MiB/s Average min max
00:07:29.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002372.71 1000169.48 1040691.68
00:07:29.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004159.29 1000121.52 1011201.44
00:07:29.835 ========================================================
00:07:29.835 Total : 256.00 0.12 1003266.00 1000121.52 1040691.68
00:07:29.835
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2272823
00:07:30.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2272823) - No such process
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2272823
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:30.093 rmmod nvme_tcp
00:07:30.093 rmmod nvme_fabrics
00:07:30.093 rmmod nvme_keyring
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2272094 ']'
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2272094
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2272094 ']'
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2272094
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:30.093 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2272094
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2272094'
00:07:30.351 killing process with pid 2272094
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2272094
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2272094
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:30.351 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:32.883
00:07:32.883 real 0m16.968s
00:07:32.883 user 0m30.867s
00:07:32.883 sys 0m5.631s
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:32.883 ************************************
00:07:32.883 END TEST nvmf_delete_subsystem
00:07:32.883 ************************************
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:32.883 ************************************
00:07:32.883 START TEST nvmf_host_management
00:07:32.883 ************************************
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:32.883 * Looking for test storage...
00:07:32.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:32.883 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:32.884 --rc genhtml_branch_coverage=1
00:07:32.884 --rc genhtml_function_coverage=1
00:07:32.884 --rc genhtml_legend=1
00:07:32.884 --rc
geninfo_all_blocks=1 00:07:32.884 --rc geninfo_unexecuted_blocks=1 00:07:32.884 00:07:32.884 ' 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.884 --rc genhtml_branch_coverage=1 00:07:32.884 --rc genhtml_function_coverage=1 00:07:32.884 --rc genhtml_legend=1 00:07:32.884 --rc geninfo_all_blocks=1 00:07:32.884 --rc geninfo_unexecuted_blocks=1 00:07:32.884 00:07:32.884 ' 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.884 --rc genhtml_branch_coverage=1 00:07:32.884 --rc genhtml_function_coverage=1 00:07:32.884 --rc genhtml_legend=1 00:07:32.884 --rc geninfo_all_blocks=1 00:07:32.884 --rc geninfo_unexecuted_blocks=1 00:07:32.884 00:07:32.884 ' 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.884 --rc genhtml_branch_coverage=1 00:07:32.884 --rc genhtml_function_coverage=1 00:07:32.884 --rc genhtml_legend=1 00:07:32.884 --rc geninfo_all_blocks=1 00:07:32.884 --rc geninfo_unexecuted_blocks=1 00:07:32.884 00:07:32.884 ' 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.884 
15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.884 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:32.885 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:39.450 
15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.450 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.450 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:39.450 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.451 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:39.451 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.451 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.451 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:07:39.451 00:07:39.451 --- 10.0.0.2 ping statistics --- 00:07:39.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.451 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:07:39.451 00:07:39.451 --- 10.0.0.1 ping statistics --- 00:07:39.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.451 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.451 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2277061 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2277061 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2277061 ']' 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.451 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.451 [2024-10-01 15:41:48.984977] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:39.451 [2024-10-01 15:41:48.985029] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.451 [2024-10-01 15:41:49.058051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.451 [2024-10-01 15:41:49.132119] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.451 [2024-10-01 15:41:49.132183] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.451 [2024-10-01 15:41:49.132190] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.451 [2024-10-01 15:41:49.132196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.451 [2024-10-01 15:41:49.132201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.451 [2024-10-01 15:41:49.132330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.451 [2024-10-01 15:41:49.132441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.451 [2024-10-01 15:41:49.132526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.451 [2024-10-01 15:41:49.132527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.709 [2024-10-01 15:41:49.875731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:39.709 15:41:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.709 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.967 Malloc0 00:07:39.967 [2024-10-01 15:41:49.935152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2277330 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2277330 /var/tmp/bdevperf.sock 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2277330 ']' 00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:39.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:07:39.967 {
00:07:39.967 "params": {
00:07:39.967 "name": "Nvme$subsystem",
00:07:39.967 "trtype": "$TEST_TRANSPORT",
00:07:39.967 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:39.967 "adrfam": "ipv4",
00:07:39.967 "trsvcid": "$NVMF_PORT",
00:07:39.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:39.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:39.967 "hdgst": ${hdgst:-false},
00:07:39.967 "ddgst": ${ddgst:-false}
00:07:39.967 },
00:07:39.967 "method": "bdev_nvme_attach_controller"
00:07:39.967 }
00:07:39.967 EOF
00:07:39.967 )")
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
00:07:39.967 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:07:39.967 "params": {
00:07:39.967 "name": "Nvme0",
00:07:39.967 "trtype": "tcp",
00:07:39.967 "traddr": "10.0.0.2",
00:07:39.967 "adrfam": "ipv4",
00:07:39.967 "trsvcid": "4420",
00:07:39.967 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:39.967 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:39.967 "hdgst": false,
00:07:39.967 "ddgst": false
00:07:39.967 },
00:07:39.967 "method": "bdev_nvme_attach_controller"
00:07:39.967 }'
[2024-10-01 15:41:50.030509] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
[2024-10-01 15:41:50.030558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277330 ]
[2024-10-01 15:41:50.100205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.225 [2024-10-01 15:41:50.174353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.483 Running I/O for 10 seconds...
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']'
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:40.742 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:41.001 [2024-10-01 15:41:50.935963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbb240 is same with the state(6) to be set
00:07:41.001 [2024-10-01 15:41:50.936043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbb240 is same with the state(6) to be set
00:07:41.001 [2024-10-01 15:41:50.936052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbb240 is
same with the state(6) to be set
00:07:41.002 [2024-10-01 15:41:50.936511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:41.002 [2024-10-01 15:41:50.936543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:41.002 [2024-10-01 15:41:50.936561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 
[2024-10-01 15:41:50.936818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.002 [2024-10-01 15:41:50.936846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.002 [2024-10-01 15:41:50.936853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.936989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.936996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 
15:41:50.937164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.003 [2024-10-01 15:41:50.937444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.003 [2024-10-01 15:41:50.937450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 [2024-10-01 15:41:50.937458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.004 [2024-10-01 15:41:50.937464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 [2024-10-01 15:41:50.937472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.004 [2024-10-01 15:41:50.937478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 [2024-10-01 15:41:50.937487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.004 [2024-10-01 15:41:50.937495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 [2024-10-01 15:41:50.937504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.004 [2024-10-01 15:41:50.937511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 
[2024-10-01 15:41:50.937519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:41.004 [2024-10-01 15:41:50.937525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:41.004 [2024-10-01 15:41:50.937533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ea8a0 is same with the state(6) to be set 00:07:41.004 [2024-10-01 15:41:50.937586] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ea8a0 was disconnected and freed. reset controller. 00:07:41.004 [2024-10-01 15:41:50.938507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:41.004 task offset: 106496 on job bdev=Nvme0n1 fails 00:07:41.004 00:07:41.004 Latency(us) 00:07:41.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.004 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.004 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:41.004 Verification LBA range: start 0x0 length 0x400 00:07:41.004 Nvme0n1 : 0.44 1870.21 116.89 143.86 0.00 31006.62 3729.31 27088.21 00:07:41.004 =================================================================================================================== 00:07:41.004 Total : 1870.21 116.89 143.86 0.00 31006.62 3729.31 27088.21 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.004 [2024-10-01 15:41:50.940902] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.004 [2024-10-01 15:41:50.940925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d15d0 (9): Bad file descriptor 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.004 15:41:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:41.004 [2024-10-01 15:41:50.992978] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2277330 00:07:41.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2277330) - No such process 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:41.938 
15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:41.938 { 00:07:41.938 "params": { 00:07:41.938 "name": "Nvme$subsystem", 00:07:41.938 "trtype": "$TEST_TRANSPORT", 00:07:41.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.938 "adrfam": "ipv4", 00:07:41.938 "trsvcid": "$NVMF_PORT", 00:07:41.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.938 "hdgst": ${hdgst:-false}, 00:07:41.938 "ddgst": ${ddgst:-false} 00:07:41.938 }, 00:07:41.938 "method": "bdev_nvme_attach_controller" 00:07:41.938 } 00:07:41.938 EOF 00:07:41.938 )") 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:41.938 15:41:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:41.938 "params": { 00:07:41.938 "name": "Nvme0", 00:07:41.938 "trtype": "tcp", 00:07:41.938 "traddr": "10.0.0.2", 00:07:41.938 "adrfam": "ipv4", 00:07:41.938 "trsvcid": "4420", 00:07:41.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:41.938 "hdgst": false, 00:07:41.938 "ddgst": false 00:07:41.938 }, 00:07:41.938 "method": "bdev_nvme_attach_controller" 00:07:41.938 }' 00:07:41.938 [2024-10-01 15:41:52.007978] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:41.938 [2024-10-01 15:41:52.008024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277580 ] 00:07:41.938 [2024-10-01 15:41:52.075433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.196 [2024-10-01 15:41:52.146831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.454 Running I/O for 1 seconds... 00:07:43.391 2048.00 IOPS, 128.00 MiB/s 00:07:43.391 Latency(us) 00:07:43.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.391 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:43.391 Verification LBA range: start 0x0 length 0x400 00:07:43.391 Nvme0n1 : 1.02 2062.07 128.88 0.00 0.00 30555.97 6179.11 26588.89 00:07:43.391 =================================================================================================================== 00:07:43.391 Total : 2062.07 128.88 0.00 0.00 30555.97 6179.11 26588.89 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 
00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.649 rmmod nvme_tcp 00:07:43.649 rmmod nvme_fabrics 00:07:43.649 rmmod nvme_keyring 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2277061 ']' 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2277061 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2277061 ']' 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2277061 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2277061 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:43.649 15:41:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2277061' 00:07:43.649 killing process with pid 2277061 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2277061 00:07:43.649 15:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2277061 00:07:43.907 [2024-10-01 15:41:54.010160] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:07:43.907 15:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:46.439 00:07:46.439 real 0m13.456s 00:07:46.439 user 0m23.987s 00:07:46.439 sys 0m5.748s 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.439 ************************************ 00:07:46.439 END TEST nvmf_host_management 00:07:46.439 ************************************ 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.439 ************************************ 00:07:46.439 START TEST nvmf_lvol 00:07:46.439 ************************************ 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:46.439 * Looking for test storage... 
00:07:46.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.439 15:41:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:46.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.439 --rc genhtml_branch_coverage=1 00:07:46.439 --rc genhtml_function_coverage=1 00:07:46.439 --rc genhtml_legend=1 00:07:46.439 --rc geninfo_all_blocks=1 00:07:46.439 --rc geninfo_unexecuted_blocks=1 
00:07:46.439 00:07:46.439 ' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:46.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.439 --rc genhtml_branch_coverage=1 00:07:46.439 --rc genhtml_function_coverage=1 00:07:46.439 --rc genhtml_legend=1 00:07:46.439 --rc geninfo_all_blocks=1 00:07:46.439 --rc geninfo_unexecuted_blocks=1 00:07:46.439 00:07:46.439 ' 00:07:46.439 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:46.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.439 --rc genhtml_branch_coverage=1 00:07:46.439 --rc genhtml_function_coverage=1 00:07:46.439 --rc genhtml_legend=1 00:07:46.439 --rc geninfo_all_blocks=1 00:07:46.439 --rc geninfo_unexecuted_blocks=1 00:07:46.440 00:07:46.440 ' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:46.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.440 --rc genhtml_branch_coverage=1 00:07:46.440 --rc genhtml_function_coverage=1 00:07:46.440 --rc genhtml_legend=1 00:07:46.440 --rc geninfo_all_blocks=1 00:07:46.440 --rc geninfo_unexecuted_blocks=1 00:07:46.440 00:07:46.440 ' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.440 15:41:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.440 15:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:53.008 15:42:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:53.008 15:42:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.008 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.008 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:53.009 15:42:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.009 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:53.009 00:07:53.009 --- 10.0.0.2 ping statistics --- 00:07:53.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.009 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:07:53.009 00:07:53.009 --- 10.0.0.1 ping statistics --- 00:07:53.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.009 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2281643 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2281643 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2281643 ']' 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.009 15:42:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.009 [2024-10-01 15:42:02.478725] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:07:53.009 [2024-10-01 15:42:02.478768] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.009 [2024-10-01 15:42:02.552237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.009 [2024-10-01 15:42:02.626138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.009 [2024-10-01 15:42:02.626181] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.009 [2024-10-01 15:42:02.626188] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.009 [2024-10-01 15:42:02.626194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.009 [2024-10-01 15:42:02.626200] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:53.009 [2024-10-01 15:42:02.626268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.009 [2024-10-01 15:42:02.626380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.009 [2024-10-01 15:42:02.626380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.268 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.526 [2024-10-01 15:42:03.505538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.526 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.784 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.784 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.042 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:54.042 15:42:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.042 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.301 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7db01d6e-084c-4ad0-b6f9-50f740518ec1 00:07:54.301 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7db01d6e-084c-4ad0-b6f9-50f740518ec1 lvol 20 00:07:54.560 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9b67d1f1-ff5a-4052-8f16-0123d378e484 00:07:54.560 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.819 15:42:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b67d1f1-ff5a-4052-8f16-0123d378e484 00:07:54.819 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.078 [2024-10-01 15:42:05.191285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.078 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.336 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2282196 00:07:55.336 15:42:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:55.336 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:56.281 15:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9b67d1f1-ff5a-4052-8f16-0123d378e484 MY_SNAPSHOT 00:07:56.539 15:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9ec1e847-3287-430d-8889-651a9e48e12e 00:07:56.539 15:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9b67d1f1-ff5a-4052-8f16-0123d378e484 30 00:07:56.798 15:42:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9ec1e847-3287-430d-8889-651a9e48e12e MY_CLONE 00:07:57.056 15:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fe34c2c2-1544-4e31-88e4-493164d2fdf2 00:07:57.056 15:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fe34c2c2-1544-4e31-88e4-493164d2fdf2 00:07:57.623 15:42:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2282196 00:08:05.741 Initializing NVMe Controllers 00:08:05.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:05.741 Controller IO queue size 128, less than required. 00:08:05.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:05.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:05.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:05.741 Initialization complete. Launching workers. 00:08:05.741 ======================================================== 00:08:05.741 Latency(us) 00:08:05.741 Device Information : IOPS MiB/s Average min max 00:08:05.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12269.35 47.93 10439.10 2114.83 100950.70 00:08:05.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12157.05 47.49 10530.31 3585.05 41240.85 00:08:05.741 ======================================================== 00:08:05.741 Total : 24426.39 95.42 10484.50 2114.83 100950.70 00:08:05.741 00:08:05.741 15:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.741 15:42:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b67d1f1-ff5a-4052-8f16-0123d378e484 00:08:05.999 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7db01d6e-084c-4ad0-b6f9-50f740518ec1 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:06.256 15:42:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.256 rmmod nvme_tcp 00:08:06.256 rmmod nvme_fabrics 00:08:06.256 rmmod nvme_keyring 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2281643 ']' 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2281643 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2281643 ']' 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2281643 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:06.256 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.514 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2281643 00:08:06.514 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.514 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.514 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2281643' 00:08:06.514 killing process with pid 2281643 00:08:06.514 
15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2281643 00:08:06.514 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2281643 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.772 15:42:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.673 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.673 00:08:08.673 real 0m22.618s 00:08:08.674 user 1m5.031s 00:08:08.674 sys 0m7.566s 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.674 ************************************ 00:08:08.674 
END TEST nvmf_lvol 00:08:08.674 ************************************ 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.674 15:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.936 ************************************ 00:08:08.936 START TEST nvmf_lvs_grow 00:08:08.936 ************************************ 00:08:08.936 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.936 * Looking for test storage... 00:08:08.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.936 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:08.936 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:08.936 15:42:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:08.936 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.937 15:42:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:08.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.937 --rc genhtml_branch_coverage=1 00:08:08.937 --rc genhtml_function_coverage=1 00:08:08.937 --rc genhtml_legend=1 00:08:08.937 --rc geninfo_all_blocks=1 00:08:08.937 --rc geninfo_unexecuted_blocks=1 00:08:08.937 00:08:08.937 ' 
00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:08.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.937 --rc genhtml_branch_coverage=1 00:08:08.937 --rc genhtml_function_coverage=1 00:08:08.937 --rc genhtml_legend=1 00:08:08.937 --rc geninfo_all_blocks=1 00:08:08.937 --rc geninfo_unexecuted_blocks=1 00:08:08.937 00:08:08.937 ' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:08.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.937 --rc genhtml_branch_coverage=1 00:08:08.937 --rc genhtml_function_coverage=1 00:08:08.937 --rc genhtml_legend=1 00:08:08.937 --rc geninfo_all_blocks=1 00:08:08.937 --rc geninfo_unexecuted_blocks=1 00:08:08.937 00:08:08.937 ' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:08.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.937 --rc genhtml_branch_coverage=1 00:08:08.937 --rc genhtml_function_coverage=1 00:08:08.937 --rc genhtml_legend=1 00:08:08.937 --rc geninfo_all_blocks=1 00:08:08.937 --rc geninfo_unexecuted_blocks=1 00:08:08.937 00:08:08.937 ' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.937 15:42:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.937 
15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.937 15:42:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.937 
15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.937 15:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:15.649 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:15.649 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:15.649 15:42:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:15.649 Found net devices under 0000:86:00.0: cvl_0_0 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:15.649 Found net devices under 0000:86:00.1: cvl_0_1 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.649 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.650 15:42:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.650 15:42:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:08:15.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:08:15.650 00:08:15.650 --- 10.0.0.2 ping statistics --- 00:08:15.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.650 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:15.650 00:08:15.650 --- 10.0.0.1 ping statistics --- 00:08:15.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.650 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2287969 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2287969 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2287969 ']' 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.650 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.650 [2024-10-01 15:42:25.158347] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:15.650 [2024-10-01 15:42:25.158394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.650 [2024-10-01 15:42:25.228528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.650 [2024-10-01 15:42:25.307180] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.650 [2024-10-01 15:42:25.307216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.650 [2024-10-01 15:42:25.307223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.650 [2024-10-01 15:42:25.307229] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.650 [2024-10-01 15:42:25.307234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.650 [2024-10-01 15:42:25.307251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.908 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.167 [2024-10-01 15:42:26.228388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.167 ************************************ 00:08:16.167 START TEST lvs_grow_clean 00:08:16.167 ************************************ 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.167 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.426 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:16.426 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:16.684 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f350439-5741-4619-8725-a9e59dae98db 00:08:16.684 15:42:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:16.684 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:16.684 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:16.684 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:16.684 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f350439-5741-4619-8725-a9e59dae98db lvol 150 00:08:16.942 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c 00:08:16.942 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.942 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:17.200 [2024-10-01 15:42:27.226726] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:17.200 [2024-10-01 15:42:27.226773] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:17.200 true 00:08:17.200 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:17.200 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:17.458 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:17.458 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.458 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c 00:08:17.717 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:17.976 [2024-10-01 15:42:27.960935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.976 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2288480 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2288480 /var/tmp/bdevperf.sock 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2288480 ']' 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.976 15:42:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.235 [2024-10-01 15:42:28.204230] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:18.235 [2024-10-01 15:42:28.204277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288480 ] 00:08:18.235 [2024-10-01 15:42:28.271595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.235 [2024-10-01 15:42:28.349152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.171 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.171 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:19.171 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.171 Nvme0n1 00:08:19.171 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.431 [ 00:08:19.431 { 00:08:19.431 "name": "Nvme0n1", 00:08:19.431 "aliases": [ 00:08:19.432 "0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c" 00:08:19.432 ], 00:08:19.432 "product_name": "NVMe disk", 00:08:19.432 "block_size": 4096, 00:08:19.432 "num_blocks": 38912, 00:08:19.432 "uuid": "0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c", 00:08:19.432 "numa_id": 1, 00:08:19.432 "assigned_rate_limits": { 00:08:19.432 "rw_ios_per_sec": 0, 00:08:19.432 "rw_mbytes_per_sec": 0, 00:08:19.432 "r_mbytes_per_sec": 0, 00:08:19.432 "w_mbytes_per_sec": 0 00:08:19.432 }, 00:08:19.432 "claimed": false, 00:08:19.432 "zoned": false, 00:08:19.432 "supported_io_types": { 00:08:19.432 "read": true, 
00:08:19.432 "write": true, 00:08:19.432 "unmap": true, 00:08:19.432 "flush": true, 00:08:19.432 "reset": true, 00:08:19.432 "nvme_admin": true, 00:08:19.432 "nvme_io": true, 00:08:19.433 "nvme_io_md": false, 00:08:19.433 "write_zeroes": true, 00:08:19.433 "zcopy": false, 00:08:19.433 "get_zone_info": false, 00:08:19.433 "zone_management": false, 00:08:19.433 "zone_append": false, 00:08:19.433 "compare": true, 00:08:19.433 "compare_and_write": true, 00:08:19.433 "abort": true, 00:08:19.433 "seek_hole": false, 00:08:19.433 "seek_data": false, 00:08:19.433 "copy": true, 00:08:19.433 "nvme_iov_md": false 00:08:19.433 }, 00:08:19.433 "memory_domains": [ 00:08:19.433 { 00:08:19.433 "dma_device_id": "system", 00:08:19.433 "dma_device_type": 1 00:08:19.433 } 00:08:19.433 ], 00:08:19.433 "driver_specific": { 00:08:19.433 "nvme": [ 00:08:19.433 { 00:08:19.433 "trid": { 00:08:19.433 "trtype": "TCP", 00:08:19.433 "adrfam": "IPv4", 00:08:19.433 "traddr": "10.0.0.2", 00:08:19.433 "trsvcid": "4420", 00:08:19.433 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:19.433 }, 00:08:19.433 "ctrlr_data": { 00:08:19.433 "cntlid": 1, 00:08:19.433 "vendor_id": "0x8086", 00:08:19.433 "model_number": "SPDK bdev Controller", 00:08:19.434 "serial_number": "SPDK0", 00:08:19.434 "firmware_revision": "25.01", 00:08:19.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.434 "oacs": { 00:08:19.434 "security": 0, 00:08:19.434 "format": 0, 00:08:19.434 "firmware": 0, 00:08:19.434 "ns_manage": 0 00:08:19.434 }, 00:08:19.434 "multi_ctrlr": true, 00:08:19.434 "ana_reporting": false 00:08:19.434 }, 00:08:19.434 "vs": { 00:08:19.434 "nvme_version": "1.3" 00:08:19.434 }, 00:08:19.434 "ns_data": { 00:08:19.434 "id": 1, 00:08:19.434 "can_share": true 00:08:19.435 } 00:08:19.435 } 00:08:19.435 ], 00:08:19.435 "mp_policy": "active_passive" 00:08:19.435 } 00:08:19.435 } 00:08:19.435 ] 00:08:19.435 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2288713 00:08:19.435 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.435 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.435 Running I/O for 10 seconds... 00:08:20.813 Latency(us) 00:08:20.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.813 Nvme0n1 : 1.00 22690.00 88.63 0.00 0.00 0.00 0.00 0.00 00:08:20.813 =================================================================================================================== 00:08:20.813 Total : 22690.00 88.63 0.00 0.00 0.00 0.00 0.00 00:08:20.813 00:08:21.380 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:21.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.639 Nvme0n1 : 2.00 23095.50 90.22 0.00 0.00 0.00 0.00 0.00 00:08:21.639 =================================================================================================================== 00:08:21.639 Total : 23095.50 90.22 0.00 0.00 0.00 0.00 0.00 00:08:21.639 00:08:21.639 true 00:08:21.639 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:21.639 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.898 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.898 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.898 15:42:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2288713 00:08:22.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.465 Nvme0n1 : 3.00 23210.00 90.66 0.00 0.00 0.00 0.00 0.00 00:08:22.465 =================================================================================================================== 00:08:22.465 Total : 23210.00 90.66 0.00 0.00 0.00 0.00 0.00 00:08:22.465 00:08:23.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.842 Nvme0n1 : 4.00 23312.50 91.06 0.00 0.00 0.00 0.00 0.00 00:08:23.842 =================================================================================================================== 00:08:23.842 Total : 23312.50 91.06 0.00 0.00 0.00 0.00 0.00 00:08:23.842 00:08:24.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.778 Nvme0n1 : 5.00 23388.00 91.36 0.00 0.00 0.00 0.00 0.00 00:08:24.778 =================================================================================================================== 00:08:24.778 Total : 23388.00 91.36 0.00 0.00 0.00 0.00 0.00 00:08:24.778 00:08:25.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.750 Nvme0n1 : 6.00 23447.50 91.59 0.00 0.00 0.00 0.00 0.00 00:08:25.750 =================================================================================================================== 00:08:25.750 Total : 23447.50 91.59 0.00 0.00 0.00 0.00 0.00 00:08:25.750 00:08:26.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.683 Nvme0n1 : 7.00 23493.71 91.77 0.00 0.00 0.00 0.00 0.00 00:08:26.683 
=================================================================================================================== 00:08:26.683 Total : 23493.71 91.77 0.00 0.00 0.00 0.00 0.00 00:08:26.683 00:08:27.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.619 Nvme0n1 : 8.00 23521.00 91.88 0.00 0.00 0.00 0.00 0.00 00:08:27.619 =================================================================================================================== 00:08:27.619 Total : 23521.00 91.88 0.00 0.00 0.00 0.00 0.00 00:08:27.619 00:08:28.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.557 Nvme0n1 : 9.00 23550.56 91.99 0.00 0.00 0.00 0.00 0.00 00:08:28.557 =================================================================================================================== 00:08:28.557 Total : 23550.56 91.99 0.00 0.00 0.00 0.00 0.00 00:08:28.557 00:08:29.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.491 Nvme0n1 : 10.00 23578.70 92.10 0.00 0.00 0.00 0.00 0.00 00:08:29.491 =================================================================================================================== 00:08:29.491 Total : 23578.70 92.10 0.00 0.00 0.00 0.00 0.00 00:08:29.491 00:08:29.491 00:08:29.491 Latency(us) 00:08:29.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.491 Nvme0n1 : 10.00 23575.04 92.09 0.00 0.00 5426.14 3167.57 14417.92 00:08:29.491 =================================================================================================================== 00:08:29.491 Total : 23575.04 92.09 0.00 0.00 5426.14 3167.57 14417.92 00:08:29.491 { 00:08:29.491 "results": [ 00:08:29.491 { 00:08:29.491 "job": "Nvme0n1", 00:08:29.491 "core_mask": "0x2", 00:08:29.491 "workload": "randwrite", 00:08:29.491 "status": "finished", 00:08:29.491 "queue_depth": 128, 
00:08:29.491 "io_size": 4096, 00:08:29.491 "runtime": 10.00431, 00:08:29.491 "iops": 23575.03915812285, 00:08:29.491 "mibps": 92.08999671141738, 00:08:29.491 "io_failed": 0, 00:08:29.491 "io_timeout": 0, 00:08:29.491 "avg_latency_us": 5426.142393543004, 00:08:29.491 "min_latency_us": 3167.5733333333333, 00:08:29.491 "max_latency_us": 14417.92 00:08:29.491 } 00:08:29.491 ], 00:08:29.491 "core_count": 1 00:08:29.491 } 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2288480 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2288480 ']' 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2288480 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.491 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2288480 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2288480' 00:08:29.750 killing process with pid 2288480 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2288480 00:08:29.750 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.750 00:08:29.750 Latency(us) 00:08:29.750 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:08:29.750 =================================================================================================================== 00:08:29.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2288480 00:08:29.750 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.008 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.268 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:30.268 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.526 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:30.526 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:30.526 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.527 [2024-10-01 15:42:40.649502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:30.527 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:30.786 request: 00:08:30.786 { 00:08:30.786 "uuid": "2f350439-5741-4619-8725-a9e59dae98db", 00:08:30.786 "method": "bdev_lvol_get_lvstores", 00:08:30.786 "req_id": 1 00:08:30.786 } 00:08:30.786 Got JSON-RPC error response 00:08:30.786 response: 00:08:30.786 { 00:08:30.786 "code": -19, 00:08:30.786 "message": "No such device" 00:08:30.786 } 00:08:30.786 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:30.786 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.786 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.786 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.786 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.045 aio_bdev 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.045 15:42:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.045 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.304 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c -t 2000 00:08:31.304 [ 00:08:31.304 { 00:08:31.304 "name": "0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c", 00:08:31.304 "aliases": [ 00:08:31.304 "lvs/lvol" 00:08:31.304 ], 00:08:31.304 "product_name": "Logical Volume", 00:08:31.304 "block_size": 4096, 00:08:31.304 "num_blocks": 38912, 00:08:31.304 "uuid": "0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c", 00:08:31.304 "assigned_rate_limits": { 00:08:31.304 "rw_ios_per_sec": 0, 00:08:31.304 "rw_mbytes_per_sec": 0, 00:08:31.304 "r_mbytes_per_sec": 0, 00:08:31.304 "w_mbytes_per_sec": 0 00:08:31.304 }, 00:08:31.304 "claimed": false, 00:08:31.304 "zoned": false, 00:08:31.304 "supported_io_types": { 00:08:31.304 "read": true, 00:08:31.304 "write": true, 00:08:31.304 "unmap": true, 00:08:31.304 "flush": false, 00:08:31.304 "reset": true, 00:08:31.304 "nvme_admin": false, 00:08:31.304 "nvme_io": false, 00:08:31.304 "nvme_io_md": false, 00:08:31.304 "write_zeroes": true, 00:08:31.304 "zcopy": false, 00:08:31.304 "get_zone_info": false, 00:08:31.304 "zone_management": false, 00:08:31.304 "zone_append": false, 00:08:31.304 "compare": false, 00:08:31.304 "compare_and_write": false, 00:08:31.304 "abort": false, 00:08:31.304 "seek_hole": true, 00:08:31.304 "seek_data": true, 00:08:31.304 "copy": false, 00:08:31.304 "nvme_iov_md": false 00:08:31.304 }, 00:08:31.304 "driver_specific": { 00:08:31.304 "lvol": { 00:08:31.304 "lvol_store_uuid": "2f350439-5741-4619-8725-a9e59dae98db", 00:08:31.304 "base_bdev": 
"aio_bdev", 00:08:31.304 "thin_provision": false, 00:08:31.304 "num_allocated_clusters": 38, 00:08:31.304 "snapshot": false, 00:08:31.304 "clone": false, 00:08:31.304 "esnap_clone": false 00:08:31.304 } 00:08:31.304 } 00:08:31.304 } 00:08:31.304 ] 00:08:31.304 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:31.304 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:31.304 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.564 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.564 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:31.564 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.823 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.823 15:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c33c57d-a7be-4bb3-9185-b5e9b80c9b3c 00:08:31.823 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f350439-5741-4619-8725-a9e59dae98db 00:08:32.081 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.340 00:08:32.340 real 0m16.129s 00:08:32.340 user 0m15.790s 00:08:32.340 sys 0m1.523s 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.340 ************************************ 00:08:32.340 END TEST lvs_grow_clean 00:08:32.340 ************************************ 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.340 ************************************ 00:08:32.340 START TEST lvs_grow_dirty 00:08:32.340 ************************************ 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.340 15:42:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.340 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.598 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.598 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.857 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:32.857 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:32.857 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r 
'.[0].total_data_clusters' 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d59405fe-e46c-4937-ad48-76a631cb45e5 lvol 150 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.116 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.374 [2024-10-01 15:42:43.434722] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.374 [2024-10-01 15:42:43.434769] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.374 true 00:08:33.374 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.374 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:33.633 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:08:33.633 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.633 15:42:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:33.891 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.149 [2024-10-01 15:42:44.172925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.149 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2291303 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2291303 /var/tmp/bdevperf.sock 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2291303 ']' 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.408 15:42:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.408 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.408 [2024-10-01 15:42:44.409556] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:08:34.408 [2024-10-01 15:42:44.409605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291303 ] 00:08:34.408 [2024-10-01 15:42:44.477490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.408 [2024-10-01 15:42:44.555710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.345 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.345 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:35.345 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.345 Nvme0n1 00:08:35.345 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.604 [ 00:08:35.604 { 00:08:35.604 "name": "Nvme0n1", 00:08:35.604 "aliases": [ 00:08:35.604 "3b8e9595-ef7d-407e-9140-7eaacf241e44" 00:08:35.604 ], 00:08:35.604 "product_name": "NVMe disk", 00:08:35.604 "block_size": 4096, 00:08:35.604 "num_blocks": 38912, 00:08:35.604 "uuid": "3b8e9595-ef7d-407e-9140-7eaacf241e44", 00:08:35.604 "numa_id": 1, 00:08:35.604 "assigned_rate_limits": { 00:08:35.604 "rw_ios_per_sec": 0, 00:08:35.604 "rw_mbytes_per_sec": 0, 00:08:35.604 "r_mbytes_per_sec": 0, 00:08:35.604 "w_mbytes_per_sec": 0 00:08:35.604 }, 00:08:35.604 "claimed": false, 00:08:35.604 "zoned": false, 00:08:35.604 "supported_io_types": { 00:08:35.605 "read": true, 
00:08:35.605 "write": true, 00:08:35.605 "unmap": true, 00:08:35.605 "flush": true, 00:08:35.605 "reset": true, 00:08:35.605 "nvme_admin": true, 00:08:35.605 "nvme_io": true, 00:08:35.605 "nvme_io_md": false, 00:08:35.605 "write_zeroes": true, 00:08:35.605 "zcopy": false, 00:08:35.605 "get_zone_info": false, 00:08:35.605 "zone_management": false, 00:08:35.605 "zone_append": false, 00:08:35.605 "compare": true, 00:08:35.605 "compare_and_write": true, 00:08:35.605 "abort": true, 00:08:35.605 "seek_hole": false, 00:08:35.605 "seek_data": false, 00:08:35.605 "copy": true, 00:08:35.605 "nvme_iov_md": false 00:08:35.605 }, 00:08:35.605 "memory_domains": [ 00:08:35.605 { 00:08:35.605 "dma_device_id": "system", 00:08:35.605 "dma_device_type": 1 00:08:35.605 } 00:08:35.605 ], 00:08:35.605 "driver_specific": { 00:08:35.605 "nvme": [ 00:08:35.605 { 00:08:35.605 "trid": { 00:08:35.605 "trtype": "TCP", 00:08:35.605 "adrfam": "IPv4", 00:08:35.605 "traddr": "10.0.0.2", 00:08:35.605 "trsvcid": "4420", 00:08:35.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.605 }, 00:08:35.605 "ctrlr_data": { 00:08:35.605 "cntlid": 1, 00:08:35.605 "vendor_id": "0x8086", 00:08:35.605 "model_number": "SPDK bdev Controller", 00:08:35.605 "serial_number": "SPDK0", 00:08:35.605 "firmware_revision": "25.01", 00:08:35.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.605 "oacs": { 00:08:35.605 "security": 0, 00:08:35.605 "format": 0, 00:08:35.605 "firmware": 0, 00:08:35.605 "ns_manage": 0 00:08:35.605 }, 00:08:35.605 "multi_ctrlr": true, 00:08:35.605 "ana_reporting": false 00:08:35.605 }, 00:08:35.605 "vs": { 00:08:35.605 "nvme_version": "1.3" 00:08:35.605 }, 00:08:35.605 "ns_data": { 00:08:35.605 "id": 1, 00:08:35.605 "can_share": true 00:08:35.605 } 00:08:35.605 } 00:08:35.605 ], 00:08:35.605 "mp_policy": "active_passive" 00:08:35.605 } 00:08:35.605 } 00:08:35.605 ] 00:08:35.605 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2291535 00:08:35.605 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.605 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.605 Running I/O for 10 seconds... 00:08:36.982 Latency(us) 00:08:36.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.982 Nvme0n1 : 1.00 23267.00 90.89 0.00 0.00 0.00 0.00 0.00 00:08:36.982 =================================================================================================================== 00:08:36.982 Total : 23267.00 90.89 0.00 0.00 0.00 0.00 0.00 00:08:36.982 00:08:37.549 15:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:37.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.808 Nvme0n1 : 2.00 23423.50 91.50 0.00 0.00 0.00 0.00 0.00 00:08:37.808 =================================================================================================================== 00:08:37.808 Total : 23423.50 91.50 0.00 0.00 0.00 0.00 0.00 00:08:37.808 00:08:37.808 true 00:08:37.808 15:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:37.808 15:42:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:38.067 15:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:38.067 15:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:38.067 15:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2291535 00:08:38.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.634 Nvme0n1 : 3.00 23448.33 91.60 0.00 0.00 0.00 0.00 0.00 00:08:38.634 =================================================================================================================== 00:08:38.634 Total : 23448.33 91.60 0.00 0.00 0.00 0.00 0.00 00:08:38.634 00:08:40.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.011 Nvme0n1 : 4.00 23527.00 91.90 0.00 0.00 0.00 0.00 0.00 00:08:40.011 =================================================================================================================== 00:08:40.011 Total : 23527.00 91.90 0.00 0.00 0.00 0.00 0.00 00:08:40.011 00:08:40.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.945 Nvme0n1 : 5.00 23453.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:40.945 =================================================================================================================== 00:08:40.945 Total : 23453.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:40.945 00:08:41.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.880 Nvme0n1 : 6.00 23515.33 91.86 0.00 0.00 0.00 0.00 0.00 00:08:41.880 =================================================================================================================== 00:08:41.880 Total : 23515.33 91.86 0.00 0.00 0.00 0.00 0.00 00:08:41.880 00:08:42.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.814 Nvme0n1 : 7.00 23558.57 92.03 0.00 0.00 0.00 0.00 0.00 00:08:42.814 
=================================================================================================================== 00:08:42.814 Total : 23558.57 92.03 0.00 0.00 0.00 0.00 0.00 00:08:42.814 00:08:43.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.747 Nvme0n1 : 8.00 23579.62 92.11 0.00 0.00 0.00 0.00 0.00 00:08:43.747 =================================================================================================================== 00:08:43.747 Total : 23579.62 92.11 0.00 0.00 0.00 0.00 0.00 00:08:43.747 00:08:44.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.682 Nvme0n1 : 9.00 23613.22 92.24 0.00 0.00 0.00 0.00 0.00 00:08:44.682 =================================================================================================================== 00:08:44.682 Total : 23613.22 92.24 0.00 0.00 0.00 0.00 0.00 00:08:44.682 00:08:45.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.617 Nvme0n1 : 10.00 23634.90 92.32 0.00 0.00 0.00 0.00 0.00 00:08:45.617 =================================================================================================================== 00:08:45.617 Total : 23634.90 92.32 0.00 0.00 0.00 0.00 0.00 00:08:45.617 00:08:45.617 00:08:45.617 Latency(us) 00:08:45.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.617 Nvme0n1 : 10.00 23638.11 92.34 0.00 0.00 5411.62 3229.99 11484.40 00:08:45.617 =================================================================================================================== 00:08:45.617 Total : 23638.11 92.34 0.00 0.00 5411.62 3229.99 11484.40 00:08:45.617 { 00:08:45.617 "results": [ 00:08:45.617 { 00:08:45.617 "job": "Nvme0n1", 00:08:45.617 "core_mask": "0x2", 00:08:45.617 "workload": "randwrite", 00:08:45.617 "status": "finished", 00:08:45.617 "queue_depth": 128, 
00:08:45.617 "io_size": 4096, 00:08:45.617 "runtime": 10.004057, 00:08:45.617 "iops": 23638.110018765386, 00:08:45.617 "mibps": 92.33636726080229, 00:08:45.617 "io_failed": 0, 00:08:45.617 "io_timeout": 0, 00:08:45.617 "avg_latency_us": 5411.617796266102, 00:08:45.617 "min_latency_us": 3229.9885714285715, 00:08:45.617 "max_latency_us": 11484.40380952381 00:08:45.617 } 00:08:45.617 ], 00:08:45.617 "core_count": 1 00:08:45.617 } 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2291303 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2291303 ']' 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2291303 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2291303 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2291303' 00:08:45.875 killing process with pid 2291303 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2291303 00:08:45.875 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.875 00:08:45.875 Latency(us) 00:08:45.875 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.875 =================================================================================================================== 00:08:45.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.875 15:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2291303 00:08:45.875 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.135 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.395 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:46.395 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2287969 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2287969 00:08:46.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2287969 Killed "${NVMF_APP[@]}" "$@" 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 
00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2293371 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2293371 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2293371 ']' 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:46.653 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.654 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.654 [2024-10-01 15:42:56.775248] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:46.654 [2024-10-01 15:42:56.775293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.912 [2024-10-01 15:42:56.848215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.912 [2024-10-01 15:42:56.925917] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.912 [2024-10-01 15:42:56.925951] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.912 [2024-10-01 15:42:56.925958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.912 [2024-10-01 15:42:56.925964] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.912 [2024-10-01 15:42:56.925969] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.912 [2024-10-01 15:42:56.925986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.478 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.737 [2024-10-01 15:42:57.804281] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:47.737 [2024-10-01 15:42:57.804376] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:47.737 [2024-10-01 15:42:57.804402] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=3b8e9595-ef7d-407e-9140-7eaacf241e44 
00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.737 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.995 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b8e9595-ef7d-407e-9140-7eaacf241e44 -t 2000 00:08:48.254 [ 00:08:48.254 { 00:08:48.254 "name": "3b8e9595-ef7d-407e-9140-7eaacf241e44", 00:08:48.254 "aliases": [ 00:08:48.254 "lvs/lvol" 00:08:48.254 ], 00:08:48.254 "product_name": "Logical Volume", 00:08:48.254 "block_size": 4096, 00:08:48.254 "num_blocks": 38912, 00:08:48.254 "uuid": "3b8e9595-ef7d-407e-9140-7eaacf241e44", 00:08:48.254 "assigned_rate_limits": { 00:08:48.254 "rw_ios_per_sec": 0, 00:08:48.254 "rw_mbytes_per_sec": 0, 00:08:48.254 "r_mbytes_per_sec": 0, 00:08:48.254 "w_mbytes_per_sec": 0 00:08:48.254 }, 00:08:48.254 "claimed": false, 00:08:48.254 "zoned": false, 00:08:48.254 "supported_io_types": { 00:08:48.254 "read": true, 00:08:48.254 "write": true, 00:08:48.254 "unmap": true, 00:08:48.254 "flush": false, 00:08:48.254 "reset": true, 00:08:48.254 "nvme_admin": false, 00:08:48.254 "nvme_io": false, 00:08:48.254 "nvme_io_md": false, 00:08:48.254 "write_zeroes": true, 00:08:48.254 "zcopy": false, 00:08:48.254 "get_zone_info": false, 00:08:48.254 "zone_management": false, 00:08:48.254 "zone_append": 
false, 00:08:48.254 "compare": false, 00:08:48.254 "compare_and_write": false, 00:08:48.254 "abort": false, 00:08:48.254 "seek_hole": true, 00:08:48.254 "seek_data": true, 00:08:48.254 "copy": false, 00:08:48.254 "nvme_iov_md": false 00:08:48.254 }, 00:08:48.254 "driver_specific": { 00:08:48.254 "lvol": { 00:08:48.254 "lvol_store_uuid": "d59405fe-e46c-4937-ad48-76a631cb45e5", 00:08:48.254 "base_bdev": "aio_bdev", 00:08:48.254 "thin_provision": false, 00:08:48.254 "num_allocated_clusters": 38, 00:08:48.254 "snapshot": false, 00:08:48.254 "clone": false, 00:08:48.254 "esnap_clone": false 00:08:48.254 } 00:08:48.254 } 00:08:48.254 } 00:08:48.254 ] 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:48.254 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:48.512 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:48.512 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:48.771 [2024-10-01 15:42:58.741304] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.771 15:42:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:48.771 request: 00:08:48.771 { 00:08:48.771 "uuid": "d59405fe-e46c-4937-ad48-76a631cb45e5", 00:08:48.771 "method": "bdev_lvol_get_lvstores", 00:08:48.771 "req_id": 1 00:08:48.771 } 00:08:48.771 Got JSON-RPC error response 00:08:48.771 response: 00:08:48.771 { 00:08:48.771 "code": -19, 00:08:48.771 "message": "No such device" 00:08:48.771 } 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.771 15:42:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.032 aio_bdev 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.032 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:49.290 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b8e9595-ef7d-407e-9140-7eaacf241e44 -t 2000 00:08:49.549 [ 00:08:49.549 { 00:08:49.549 "name": "3b8e9595-ef7d-407e-9140-7eaacf241e44", 00:08:49.549 "aliases": [ 00:08:49.549 "lvs/lvol" 00:08:49.549 ], 00:08:49.549 "product_name": "Logical Volume", 00:08:49.549 "block_size": 4096, 00:08:49.549 "num_blocks": 38912, 00:08:49.549 "uuid": "3b8e9595-ef7d-407e-9140-7eaacf241e44", 00:08:49.549 "assigned_rate_limits": { 00:08:49.549 "rw_ios_per_sec": 0, 00:08:49.549 "rw_mbytes_per_sec": 0, 00:08:49.549 "r_mbytes_per_sec": 0, 00:08:49.549 "w_mbytes_per_sec": 0 00:08:49.549 }, 00:08:49.549 "claimed": false, 00:08:49.549 "zoned": false, 00:08:49.549 "supported_io_types": { 00:08:49.549 "read": true, 00:08:49.549 "write": true, 00:08:49.549 "unmap": true, 00:08:49.549 "flush": false, 00:08:49.549 "reset": true, 00:08:49.549 "nvme_admin": false, 00:08:49.549 "nvme_io": false, 00:08:49.549 "nvme_io_md": false, 00:08:49.549 "write_zeroes": true, 00:08:49.549 "zcopy": false, 00:08:49.549 "get_zone_info": false, 00:08:49.549 "zone_management": false, 00:08:49.549 "zone_append": false, 00:08:49.549 "compare": false, 00:08:49.549 "compare_and_write": false, 
00:08:49.549 "abort": false, 00:08:49.549 "seek_hole": true, 00:08:49.549 "seek_data": true, 00:08:49.549 "copy": false, 00:08:49.549 "nvme_iov_md": false 00:08:49.549 }, 00:08:49.549 "driver_specific": { 00:08:49.549 "lvol": { 00:08:49.549 "lvol_store_uuid": "d59405fe-e46c-4937-ad48-76a631cb45e5", 00:08:49.549 "base_bdev": "aio_bdev", 00:08:49.549 "thin_provision": false, 00:08:49.549 "num_allocated_clusters": 38, 00:08:49.549 "snapshot": false, 00:08:49.549 "clone": false, 00:08:49.549 "esnap_clone": false 00:08:49.549 } 00:08:49.549 } 00:08:49.549 } 00:08:49.549 ] 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:49.549 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.809 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.809 15:42:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b8e9595-ef7d-407e-9140-7eaacf241e44 00:08:50.068 15:43:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d59405fe-e46c-4937-ad48-76a631cb45e5 00:08:50.327 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.327 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.586 00:08:50.586 real 0m18.044s 00:08:50.586 user 0m46.105s 00:08:50.586 sys 0m3.949s 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.586 ************************************ 00:08:50.586 END TEST lvs_grow_dirty 00:08:50.586 ************************************ 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:50.586 nvmf_trace.0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.586 rmmod nvme_tcp 00:08:50.586 rmmod nvme_fabrics 00:08:50.586 rmmod nvme_keyring 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2293371 ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2293371 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2293371 ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2293371 
00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2293371 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2293371' 00:08:50.586 killing process with pid 2293371 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2293371 00:08:50.586 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2293371 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.846 15:43:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.383 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.383 00:08:53.383 real 0m44.130s 00:08:53.383 user 1m8.313s 00:08:53.383 sys 0m10.444s 00:08:53.383 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.383 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 ************************************ 00:08:53.383 END TEST nvmf_lvs_grow 00:08:53.383 ************************************ 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 ************************************ 00:08:53.383 START TEST nvmf_bdev_io_wait 00:08:53.383 ************************************ 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:53.383 * Looking for test storage... 
00:08:53.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.383 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.384 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.384 --rc genhtml_branch_coverage=1 00:08:53.384 --rc genhtml_function_coverage=1 00:08:53.384 --rc genhtml_legend=1 00:08:53.384 --rc geninfo_all_blocks=1 00:08:53.384 --rc geninfo_unexecuted_blocks=1 00:08:53.384 00:08:53.384 ' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.384 --rc genhtml_branch_coverage=1 00:08:53.384 --rc genhtml_function_coverage=1 00:08:53.384 --rc genhtml_legend=1 00:08:53.384 --rc geninfo_all_blocks=1 00:08:53.384 --rc geninfo_unexecuted_blocks=1 00:08:53.384 00:08:53.384 ' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.384 --rc genhtml_branch_coverage=1 00:08:53.384 --rc genhtml_function_coverage=1 00:08:53.384 --rc genhtml_legend=1 00:08:53.384 --rc geninfo_all_blocks=1 00:08:53.384 --rc geninfo_unexecuted_blocks=1 00:08:53.384 00:08:53.384 ' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.384 --rc genhtml_branch_coverage=1 00:08:53.384 --rc genhtml_function_coverage=1 00:08:53.384 --rc genhtml_legend=1 00:08:53.384 --rc geninfo_all_blocks=1 00:08:53.384 --rc geninfo_unexecuted_blocks=1 00:08:53.384 00:08:53.384 ' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.384 15:43:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:53.384 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:53.385 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:53.385 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.385 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.959 15:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.959 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:59.960 15:43:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:59.960 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:59.960 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.960 
15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:59.960 Found net devices under 0000:86:00.0: cvl_0_0 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.960 
15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:59.960 Found net devices under 0000:86:00.1: cvl_0_1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.960 15:43:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:08:59.960 00:08:59.960 --- 10.0.0.2 ping statistics --- 00:08:59.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.960 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:08:59.960 00:08:59.960 --- 10.0.0.1 ping statistics --- 00:08:59.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.960 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
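The common.sh trace above builds the test topology: the target-side interface is moved into a private network namespace, each side gets an address on 10.0.0.0/24, links are brought up, and an iptables rule opens the NVMe/TCP port before the cross-namespace pings verify connectivity. A minimal dry-run sketch of those steps (interface names, namespace, and addresses copied from the log; this is not SPDK's helper itself, and `run` only echoes the commands unless ROOT_EXEC=1, since the real `ip netns`/`iptables` calls need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf/common.sh in the
# trace above. Names mirror the log; run() echoes instead of executing so the
# sketch can be exercised without privileges.
set -euo pipefail

TARGET_IF=cvl_0_0       # moved into the namespace; listens on 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace; gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run() { if [[ "${ROOT_EXEC:-0}" == 1 ]]; then "$@"; else echo "+ $*"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open TCP/4420 (NVMe/TCP) on the initiator-side interface, tagged so the
# cleanup pass can find the rule later (see the SPDK_NVMF comment in the log).
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# The trace then verifies both directions with ping before starting nvmf_tgt.
```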
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2297613 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2297613 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2297613 ']' 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.960 15:43:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 [2024-10-01 15:43:09.378695] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:08:59.960 [2024-10-01 15:43:09.378744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.960 [2024-10-01 15:43:09.452284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.960 [2024-10-01 15:43:09.533580] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.961 [2024-10-01 15:43:09.533615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.961 [2024-10-01 15:43:09.533623] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.961 [2024-10-01 15:43:09.533629] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.961 [2024-10-01 15:43:09.533634] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:59.961 [2024-10-01 15:43:09.533712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.961 [2024-10-01 15:43:09.533838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.961 [2024-10-01 15:43:09.533942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.961 [2024-10-01 15:43:09.533943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 [2024-10-01 15:43:10.324337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 Malloc0 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.221 15:43:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.221 [2024-10-01 15:43:10.391376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2297713 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2297715 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:00.221 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:00.221 { 00:09:00.221 "params": { 00:09:00.221 "name": "Nvme$subsystem", 00:09:00.221 "trtype": "$TEST_TRANSPORT", 00:09:00.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.221 "adrfam": "ipv4", 00:09:00.221 "trsvcid": "$NVMF_PORT", 00:09:00.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.222 "hdgst": ${hdgst:-false}, 00:09:00.222 "ddgst": ${ddgst:-false} 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 } 00:09:00.222 EOF 00:09:00.222 )") 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2297717 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:00.222 { 00:09:00.222 "params": { 00:09:00.222 "name": "Nvme$subsystem", 00:09:00.222 "trtype": "$TEST_TRANSPORT", 
00:09:00.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.222 "adrfam": "ipv4", 00:09:00.222 "trsvcid": "$NVMF_PORT", 00:09:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.222 "hdgst": ${hdgst:-false}, 00:09:00.222 "ddgst": ${ddgst:-false} 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 } 00:09:00.222 EOF 00:09:00.222 )") 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2297720 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:00.222 { 00:09:00.222 "params": { 00:09:00.222 "name": "Nvme$subsystem", 00:09:00.222 "trtype": "$TEST_TRANSPORT", 00:09:00.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.222 "adrfam": "ipv4", 00:09:00.222 "trsvcid": "$NVMF_PORT", 00:09:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.222 "hdgst": ${hdgst:-false}, 
00:09:00.222 "ddgst": ${ddgst:-false} 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 } 00:09:00.222 EOF 00:09:00.222 )") 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:00.222 { 00:09:00.222 "params": { 00:09:00.222 "name": "Nvme$subsystem", 00:09:00.222 "trtype": "$TEST_TRANSPORT", 00:09:00.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.222 "adrfam": "ipv4", 00:09:00.222 "trsvcid": "$NVMF_PORT", 00:09:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.222 "hdgst": ${hdgst:-false}, 00:09:00.222 "ddgst": ${ddgst:-false} 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 } 00:09:00.222 EOF 00:09:00.222 )") 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2297713 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:00.222 "params": { 00:09:00.222 "name": "Nvme1", 00:09:00.222 "trtype": "tcp", 00:09:00.222 "traddr": "10.0.0.2", 00:09:00.222 "adrfam": "ipv4", 00:09:00.222 "trsvcid": "4420", 00:09:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.222 "hdgst": false, 00:09:00.222 "ddgst": false 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 }' 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:00.222 "params": { 00:09:00.222 "name": "Nvme1", 00:09:00.222 "trtype": "tcp", 00:09:00.222 "traddr": "10.0.0.2", 00:09:00.222 "adrfam": "ipv4", 00:09:00.222 "trsvcid": "4420", 00:09:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.222 "hdgst": false, 00:09:00.222 "ddgst": false 00:09:00.222 }, 00:09:00.222 "method": "bdev_nvme_attach_controller" 00:09:00.222 }' 00:09:00.222 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
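The heredoc fragments and `printf`/`jq` output above come from `gen_nvmf_target_json`, which expands per-subsystem shell variables into a JSON `bdev_nvme_attach_controller` stanza that each bdevperf instance reads via `--json /dev/fd/63`. A hedged, standalone sketch of that expansion (a single subsystem, with values hard-coded to match the resolved JSON printed in the log; not SPDK's exact helper):

```shell
# Sketch of the heredoc expansion seen at nvmf/common.sh@578 in the trace.
# The variables below are stand-ins resolved to the values the log shows.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the test itself this string is fed to each bdevperf process through process substitution, which is why the command lines above show `--json /dev/fd/63`.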
00:09:00.480 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:00.480 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:00.480 "params": { 00:09:00.480 "name": "Nvme1", 00:09:00.480 "trtype": "tcp", 00:09:00.480 "traddr": "10.0.0.2", 00:09:00.480 "adrfam": "ipv4", 00:09:00.480 "trsvcid": "4420", 00:09:00.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.480 "hdgst": false, 00:09:00.480 "ddgst": false 00:09:00.480 }, 00:09:00.480 "method": "bdev_nvme_attach_controller" 00:09:00.480 }' 00:09:00.480 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:00.480 15:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:00.480 "params": { 00:09:00.480 "name": "Nvme1", 00:09:00.480 "trtype": "tcp", 00:09:00.480 "traddr": "10.0.0.2", 00:09:00.480 "adrfam": "ipv4", 00:09:00.480 "trsvcid": "4420", 00:09:00.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.480 "hdgst": false, 00:09:00.480 "ddgst": false 00:09:00.480 }, 00:09:00.480 "method": "bdev_nvme_attach_controller" 00:09:00.480 }' 00:09:00.480 [2024-10-01 15:43:10.444155] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:00.480 [2024-10-01 15:43:10.444159] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:00.480 [2024-10-01 15:43:10.444159] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:00.481 [2024-10-01 15:43:10.444205] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:00.481 [2024-10-01 15:43:10.444205] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:00.481 [2024-10-01 15:43:10.444206] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:00.481 [2024-10-01 15:43:10.448135] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:00.481 [2024-10-01 15:43:10.448180] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:00.481 [2024-10-01 15:43:10.625936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.738 [2024-10-01 15:43:10.703005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:00.738 [2024-10-01 15:43:10.717537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.738 [2024-10-01 15:43:10.790489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:00.738 [2024-10-01 15:43:10.818618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.738 [2024-10-01 15:43:10.879234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.738 [2024-10-01 15:43:10.907046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on 
core 5 00:09:00.996 [2024-10-01 15:43:10.956300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:00.996 Running I/O for 1 seconds... 00:09:00.996 Running I/O for 1 seconds... 00:09:01.254 Running I/O for 1 seconds... 00:09:01.511 Running I/O for 1 seconds... 00:09:02.078 9303.00 IOPS, 36.34 MiB/s 00:09:02.078 Latency(us) 00:09:02.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.078 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:02.078 Nvme1n1 : 1.02 9312.17 36.38 0.00 0.00 13614.67 4244.24 24341.94 00:09:02.078 =================================================================================================================== 00:09:02.078 Total : 9312.17 36.38 0.00 0.00 13614.67 4244.24 24341.94 00:09:02.078 11690.00 IOPS, 45.66 MiB/s 00:09:02.078 Latency(us) 00:09:02.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.078 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:02.078 Nvme1n1 : 1.01 11746.92 45.89 0.00 0.00 10851.82 4649.94 17601.10 00:09:02.078 =================================================================================================================== 00:09:02.078 Total : 11746.92 45.89 0.00 0.00 10851.82 4649.94 17601.10 00:09:02.337 9331.00 IOPS, 36.45 MiB/s 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2297715 00:09:02.337 00:09:02.337 Latency(us) 00:09:02.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.337 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:02.337 Nvme1n1 : 1.00 9421.63 36.80 0.00 0.00 13554.27 3370.42 35951.18 00:09:02.337 =================================================================================================================== 00:09:02.337 Total : 9421.63 36.80 0.00 0.00 13554.27 3370.42 35951.18 00:09:02.337 252272.00 IOPS, 985.44 MiB/s 00:09:02.337 
Latency(us) 00:09:02.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.337 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:02.337 Nvme1n1 : 1.00 251890.07 983.95 0.00 0.00 505.69 236.98 1513.57 00:09:02.337 =================================================================================================================== 00:09:02.337 Total : 251890.07 983.95 0.00 0.00 505.69 236.98 1513.57 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2297717 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2297720 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.596 15:43:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.596 rmmod nvme_tcp 00:09:02.596 rmmod nvme_fabrics 00:09:02.596 rmmod nvme_keyring 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2297613 ']' 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2297613 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2297613 ']' 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2297613 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.596 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297613 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297613' 00:09:02.855 killing process with pid 2297613 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2297613 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@974 -- # wait 2297613 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:02.855 15:43:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:02.855 15:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.855 15:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.855 15:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.855 15:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.855 15:43:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.392 00:09:05.392 real 0m11.991s 00:09:05.392 user 0m21.322s 00:09:05.392 sys 0m6.426s 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.392 ************************************ 00:09:05.392 END TEST nvmf_bdev_io_wait 
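The `iptr` cleanup at nvmf/common.sh@787 in the trace works by round-tripping the ruleset: dump it with `iptables-save`, drop every rule carrying the `SPDK_NVMF` comment tag that setup added, and feed the remainder to `iptables-restore`. The filtering stage can be sketched against a sample dump (no root needed; `sample_rules` below is a hypothetical stand-in for real `iptables-save` output):

```shell
# Sketch of the grep -v SPDK_NVMF filtering stage from the cleanup above.
# sample_rules imitates an iptables-save dump containing one SPDK-tagged rule.
sample_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Keep only rules not tagged by SPDK; the real helper pipes this straight on:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
kept=$(printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF)
echo "$kept"
```

Tagging rules with a comment at insert time is what makes this teardown safe: only rules the test itself added are removed, and unrelated firewall rules survive the restore.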
00:09:05.392 ************************************ 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.392 ************************************ 00:09:05.392 START TEST nvmf_queue_depth 00:09:05.392 ************************************ 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:05.392 * Looking for test storage... 00:09:05.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.392 15:43:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.392 --rc genhtml_branch_coverage=1 00:09:05.392 --rc genhtml_function_coverage=1 00:09:05.392 --rc genhtml_legend=1 00:09:05.392 --rc geninfo_all_blocks=1 00:09:05.392 --rc 
geninfo_unexecuted_blocks=1 00:09:05.392 00:09:05.392 ' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.392 --rc genhtml_branch_coverage=1 00:09:05.392 --rc genhtml_function_coverage=1 00:09:05.392 --rc genhtml_legend=1 00:09:05.392 --rc geninfo_all_blocks=1 00:09:05.392 --rc geninfo_unexecuted_blocks=1 00:09:05.392 00:09:05.392 ' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.392 --rc genhtml_branch_coverage=1 00:09:05.392 --rc genhtml_function_coverage=1 00:09:05.392 --rc genhtml_legend=1 00:09:05.392 --rc geninfo_all_blocks=1 00:09:05.392 --rc geninfo_unexecuted_blocks=1 00:09:05.392 00:09:05.392 ' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.392 --rc genhtml_branch_coverage=1 00:09:05.392 --rc genhtml_function_coverage=1 00:09:05.392 --rc genhtml_legend=1 00:09:05.392 --rc geninfo_all_blocks=1 00:09:05.392 --rc geninfo_unexecuted_blocks=1 00:09:05.392 00:09:05.392 ' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.392 15:43:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.392 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.393 15:43:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.393 15:43:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.393 15:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.964 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.965 15:43:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 
== mlx5 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:11.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:11.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:11.965 Found net devices under 0000:86:00.0: cvl_0_0 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:11.965 
15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:11.965 Found net devices under 0000:86:00.1: cvl_0_1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.965 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:09:11.966 00:09:11.966 --- 10.0.0.2 ping statistics --- 00:09:11.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.966 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:11.966 00:09:11.966 --- 10.0.0.1 ping statistics --- 00:09:11.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.966 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
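Editorial note: the `nvmf_tcp_init` plumbing traced above (nvmf/common.sh@250-291) reduces to the sequence below. Interface names (`cvl_0_0`, `cvl_0_1`), addresses, and the iptables rule are taken from the log; this is a dry-run sketch that only prints the commands, since the real steps need root. Replace `echo` in `run` to actually apply them.

```shell
#!/bin/sh
# Dry-run sketch of the TCP test-network setup seen in the trace above:
# move the target NIC into its own namespace, address both ends, open
# port 4420, and verify reachability in both directions.
run() { echo "+ $*"; }   # swap for: "$@" (as root) to execute for real

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```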
00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2301729 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2301729 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2301729 ']' 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.966 15:43:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.966 [2024-10-01 15:43:21.405990] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:11.966 [2024-10-01 15:43:21.406042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.966 [2024-10-01 15:43:21.478642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.966 [2024-10-01 15:43:21.557875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.966 [2024-10-01 15:43:21.557908] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.966 [2024-10-01 15:43:21.557916] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.966 [2024-10-01 15:43:21.557922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.966 [2024-10-01 15:43:21.557927] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:11.966 [2024-10-01 15:43:21.557944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 [2024-10-01 15:43:22.274111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 Malloc0 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.225 15:43:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.225 [2024-10-01 15:43:22.346057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2301977 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2301977 /var/tmp/bdevperf.sock 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2301977 ']' 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.225 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.226 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.226 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.226 15:43:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.226 [2024-10-01 15:43:22.398164] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:09:12.226 [2024-10-01 15:43:22.398203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301977 ] 00:09:12.485 [2024-10-01 15:43:22.467284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.485 [2024-10-01 15:43:22.540291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.053 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.053 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:13.053 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:13.053 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.053 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.312 NVMe0n1 00:09:13.312 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.312 15:43:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.571 Running I/O for 10 seconds... 
00:09:23.831 11877.00 IOPS, 46.39 MiB/s 12278.50 IOPS, 47.96 MiB/s 12282.33 IOPS, 47.98 MiB/s 12301.50 IOPS, 48.05 MiB/s 12395.40 IOPS, 48.42 MiB/s 12429.17 IOPS, 48.55 MiB/s 12424.43 IOPS, 48.53 MiB/s 12442.12 IOPS, 48.60 MiB/s 12481.33 IOPS, 48.76 MiB/s 12483.70 IOPS, 48.76 MiB/s 00:09:23.831 Latency(us) 00:09:23.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.831 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:23.831 Verification LBA range: start 0x0 length 0x4000 00:09:23.831 NVMe0n1 : 10.05 12513.89 48.88 0.00 0.00 81568.32 12857.54 52928.12 00:09:23.831 =================================================================================================================== 00:09:23.831 Total : 12513.89 48.88 0.00 0.00 81568.32 12857.54 52928.12 00:09:23.831 { 00:09:23.831 "results": [ 00:09:23.831 { 00:09:23.831 "job": "NVMe0n1", 00:09:23.831 "core_mask": "0x1", 00:09:23.831 "workload": "verify", 00:09:23.831 "status": "finished", 00:09:23.831 "verify_range": { 00:09:23.831 "start": 0, 00:09:23.831 "length": 16384 00:09:23.831 }, 00:09:23.831 "queue_depth": 1024, 00:09:23.831 "io_size": 4096, 00:09:23.831 "runtime": 10.051229, 00:09:23.831 "iops": 12513.892579703437, 00:09:23.831 "mibps": 48.88239288946655, 00:09:23.831 "io_failed": 0, 00:09:23.831 "io_timeout": 0, 00:09:23.831 "avg_latency_us": 81568.32452175756, 00:09:23.831 "min_latency_us": 12857.539047619048, 00:09:23.831 "max_latency_us": 52928.1219047619 00:09:23.831 } 00:09:23.831 ], 00:09:23.831 "core_count": 1 00:09:23.831 } 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2301977 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2301977 ']' 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2301977 00:09:23.831 15:43:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301977 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301977' 00:09:23.831 killing process with pid 2301977 00:09:23.831 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2301977 00:09:23.831 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.831 00:09:23.831 Latency(us) 00:09:23.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.832 =================================================================================================================== 00:09:23.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2301977 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.832 15:43:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.832 rmmod nvme_tcp 00:09:23.832 rmmod nvme_fabrics 00:09:23.832 rmmod nvme_keyring 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2301729 ']' 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2301729 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2301729 ']' 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2301729 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.832 15:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301729 00:09:23.832 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:23.832 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:23.832 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301729' 00:09:23.832 killing process with pid 2301729 
00:09:23.832 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2301729 00:09:23.832 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2301729 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.101 15:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.097 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.356 00:09:26.356 real 0m21.148s 00:09:26.356 user 0m25.513s 00:09:26.356 sys 0m6.116s 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.356 15:43:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.356 ************************************ 00:09:26.356 END TEST nvmf_queue_depth 00:09:26.356 ************************************ 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.356 ************************************ 00:09:26.356 START TEST nvmf_target_multipath 00:09:26.356 ************************************ 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:26.356 * Looking for test storage... 
00:09:26.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:26.356 15:43:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:26.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.356 --rc genhtml_branch_coverage=1 00:09:26.356 --rc genhtml_function_coverage=1 00:09:26.356 --rc genhtml_legend=1 00:09:26.356 --rc geninfo_all_blocks=1 00:09:26.356 --rc geninfo_unexecuted_blocks=1 00:09:26.356 00:09:26.356 ' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:26.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.356 --rc genhtml_branch_coverage=1 00:09:26.356 --rc genhtml_function_coverage=1 00:09:26.356 --rc genhtml_legend=1 00:09:26.356 --rc geninfo_all_blocks=1 00:09:26.356 --rc geninfo_unexecuted_blocks=1 00:09:26.356 00:09:26.356 ' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:26.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.356 --rc genhtml_branch_coverage=1 00:09:26.356 --rc genhtml_function_coverage=1 00:09:26.356 --rc genhtml_legend=1 00:09:26.356 --rc geninfo_all_blocks=1 00:09:26.356 --rc geninfo_unexecuted_blocks=1 00:09:26.356 00:09:26.356 ' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:26.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.356 --rc genhtml_branch_coverage=1 00:09:26.356 --rc genhtml_function_coverage=1 00:09:26.356 --rc genhtml_legend=1 00:09:26.356 --rc geninfo_all_blocks=1 00:09:26.356 --rc geninfo_unexecuted_blocks=1 00:09:26.356 00:09:26.356 ' 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.356 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.616 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.617 15:43:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:33.217 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:33.217 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:33.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:33.218 15:43:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:33.218 Found net devices under 0000:86:00.0: cvl_0_0 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.218 15:43:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:33.218 Found net devices under 0000:86:00.1: cvl_0_1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.218 
15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:09:33.218 00:09:33.218 --- 10.0.0.2 ping statistics --- 00:09:33.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.218 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:33.218 00:09:33.218 --- 10.0.0.1 ping statistics --- 00:09:33.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.218 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:33.218 only one NIC for nvmf test 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.218 rmmod nvme_tcp 00:09:33.218 rmmod nvme_fabrics 00:09:33.218 rmmod nvme_keyring 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.218 15:43:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.598 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.857 00:09:34.857 real 0m8.427s 00:09:34.857 user 0m1.839s 00:09:34.857 sys 0m4.595s 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:34.857 ************************************ 00:09:34.857 END TEST nvmf_target_multipath 00:09:34.857 ************************************ 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.857 ************************************ 00:09:34.857 START TEST nvmf_zcopy 00:09:34.857 ************************************ 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:34.857 * Looking for test storage... 
00:09:34.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.857 15:43:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.857 
15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:34.857 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.858 --rc genhtml_branch_coverage=1 00:09:34.858 --rc genhtml_function_coverage=1 00:09:34.858 --rc genhtml_legend=1 00:09:34.858 --rc geninfo_all_blocks=1 00:09:34.858 --rc 
geninfo_unexecuted_blocks=1 00:09:34.858 00:09:34.858 ' 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.858 --rc genhtml_branch_coverage=1 00:09:34.858 --rc genhtml_function_coverage=1 00:09:34.858 --rc genhtml_legend=1 00:09:34.858 --rc geninfo_all_blocks=1 00:09:34.858 --rc geninfo_unexecuted_blocks=1 00:09:34.858 00:09:34.858 ' 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.858 --rc genhtml_branch_coverage=1 00:09:34.858 --rc genhtml_function_coverage=1 00:09:34.858 --rc genhtml_legend=1 00:09:34.858 --rc geninfo_all_blocks=1 00:09:34.858 --rc geninfo_unexecuted_blocks=1 00:09:34.858 00:09:34.858 ' 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.858 --rc genhtml_branch_coverage=1 00:09:34.858 --rc genhtml_function_coverage=1 00:09:34.858 --rc genhtml_legend=1 00:09:34.858 --rc geninfo_all_blocks=1 00:09:34.858 --rc geninfo_unexecuted_blocks=1 00:09:34.858 00:09:34.858 ' 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.858 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
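The `lt 1.15 2` trace above (scripts/common.sh@333-368) splits each version string on `.`, `-`, and `:` and compares the fields numerically, left to right. A minimal self-contained sketch of that comparison logic, with an illustrative function name (`lt_version` is not the SPDK helper's actual name):

```shell
# Hypothetical sketch of the component-wise version comparison traced above.
# Split each version on ".-:" and compare field by field; missing fields
# are treated as 0. Returns 0 (true) when $1 is strictly less than $2.
lt_version() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && return 1   # first differing field decides
        (( d1 < d2 )) && return 0
    done
    return 1                        # equal versions are not less-than
}

lt_version 1.15 2 && echo "1.15 < 2"
```

This mirrors what the trace shows: `decimal 1` / `decimal 2` validate and normalize each field, then `(( ver1[v] < ver2[v] ))` decides as soon as a field differs.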
00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.117 15:43:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.117 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.118 15:43:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:41.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:41.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:41.712 15:43:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:41.712 Found net devices under 0000:86:00.0: cvl_0_0 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:41.712 Found net devices under 0000:86:00.1: cvl_0_1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:41.712 15:43:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.712 15:43:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:09:41.712 00:09:41.712 --- 10.0.0.2 ping statistics --- 00:09:41.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.712 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:09:41.712 00:09:41.712 --- 10.0.0.1 ping statistics --- 00:09:41.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.712 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:41.712 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=2310907 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 2310907 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2310907 ']' 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.713 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.713 [2024-10-01 15:43:51.159121] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:41.713 [2024-10-01 15:43:51.159169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.713 [2024-10-01 15:43:51.233018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.713 [2024-10-01 15:43:51.310459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.713 [2024-10-01 15:43:51.310497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:41.713 [2024-10-01 15:43:51.310504] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.713 [2024-10-01 15:43:51.310511] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.713 [2024-10-01 15:43:51.310515] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.713 [2024-10-01 15:43:51.310555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.972 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.972 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:41.972 15:43:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:41.972 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.972 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 [2024-10-01 15:43:52.042044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 [2024-10-01 15:43:52.058227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 malloc0 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:41.973 { 00:09:41.973 "params": { 00:09:41.973 "name": "Nvme$subsystem", 00:09:41.973 "trtype": "$TEST_TRANSPORT", 00:09:41.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.973 "adrfam": "ipv4", 00:09:41.973 "trsvcid": "$NVMF_PORT", 00:09:41.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.973 "hdgst": ${hdgst:-false}, 00:09:41.973 "ddgst": ${ddgst:-false} 00:09:41.973 }, 00:09:41.973 "method": "bdev_nvme_attach_controller" 00:09:41.973 } 00:09:41.973 EOF 00:09:41.973 )") 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:41.973 15:43:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:41.973 "params": { 00:09:41.973 "name": "Nvme1", 00:09:41.973 "trtype": "tcp", 00:09:41.973 "traddr": "10.0.0.2", 00:09:41.973 "adrfam": "ipv4", 00:09:41.973 "trsvcid": "4420", 00:09:41.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.973 "hdgst": false, 00:09:41.973 "ddgst": false 00:09:41.973 }, 00:09:41.973 "method": "bdev_nvme_attach_controller" 00:09:41.973 }' 00:09:41.973 [2024-10-01 15:43:52.153465] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:41.973 [2024-10-01 15:43:52.153509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311157 ] 00:09:42.232 [2024-10-01 15:43:52.221689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.232 [2024-10-01 15:43:52.294457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.491 Running I/O for 10 seconds... 
00:09:52.395 8680.00 IOPS, 67.81 MiB/s 8723.50 IOPS, 68.15 MiB/s 8761.00 IOPS, 68.45 MiB/s 8778.00 IOPS, 68.58 MiB/s 8790.20 IOPS, 68.67 MiB/s 8795.50 IOPS, 68.71 MiB/s 8797.43 IOPS, 68.73 MiB/s 8788.88 IOPS, 68.66 MiB/s 8795.44 IOPS, 68.71 MiB/s 8802.10 IOPS, 68.77 MiB/s 00:09:52.395 Latency(us) 00:09:52.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.395 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:52.395 Verification LBA range: start 0x0 length 0x1000 00:09:52.395 Nvme1n1 : 10.01 8806.68 68.80 0.00 0.00 14493.67 1357.53 22843.98 00:09:52.395 =================================================================================================================== 00:09:52.395 Total : 8806.68 68.80 0.00 0.00 14493.67 1357.53 22843.98 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2312986 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:52.653 { 00:09:52.653 "params": { 00:09:52.653 "name": "Nvme$subsystem", 00:09:52.653 "trtype": "$TEST_TRANSPORT", 00:09:52.653 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.653 "adrfam": "ipv4", 00:09:52.653 "trsvcid": "$NVMF_PORT", 00:09:52.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.653 "hdgst": ${hdgst:-false}, 00:09:52.653 "ddgst": ${ddgst:-false} 00:09:52.653 }, 00:09:52.653 "method": "bdev_nvme_attach_controller" 00:09:52.653 } 00:09:52.653 EOF 00:09:52.653 )") 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:52.653 [2024-10-01 15:44:02.766451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.766483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:52.653 15:44:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:52.653 "params": { 00:09:52.653 "name": "Nvme1", 00:09:52.653 "trtype": "tcp", 00:09:52.653 "traddr": "10.0.0.2", 00:09:52.653 "adrfam": "ipv4", 00:09:52.653 "trsvcid": "4420", 00:09:52.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.653 "hdgst": false, 00:09:52.653 "ddgst": false 00:09:52.653 }, 00:09:52.653 "method": "bdev_nvme_attach_controller" 00:09:52.653 }' 00:09:52.653 [2024-10-01 15:44:02.778448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.778461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 [2024-10-01 15:44:02.790472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.790481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 [2024-10-01 
15:44:02.801774] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:09:52.653 [2024-10-01 15:44:02.801814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312986 ] 00:09:52.653 [2024-10-01 15:44:02.802508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.802521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 [2024-10-01 15:44:02.814539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.814549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 [2024-10-01 15:44:02.826566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.826575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.653 [2024-10-01 15:44:02.838603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.653 [2024-10-01 15:44:02.838616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.850633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.850642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.862662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.862671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.867626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.912 [2024-10-01 15:44:02.874698] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.874709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.886726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.886737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.898760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.898769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.910799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.910819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.922825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.922835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.934852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.934861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.941764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.912 [2024-10-01 15:44:02.946887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.946897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.958933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.958953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:52.912 [2024-10-01 15:44:02.970958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.970972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.982987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.983001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:02.995017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:02.995028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.007053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:03.007065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.019081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:03.019091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.031132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:03.031156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.043154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:03.043168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.055188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.912 [2024-10-01 15:44:03.055201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.912 [2024-10-01 15:44:03.067213] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:52.912 [2024-10-01 15:44:03.067222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:53.171 Running I/O for 5 seconds...
00:09:54.207 16887.00 IOPS, 131.93 MiB/s
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.065403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.065423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.074829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.074850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.084209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.084228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.093435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.093455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.102852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.102882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.111957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.111978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.120628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.120649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.129868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.129887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:54.985 [2024-10-01 15:44:05.139616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.139636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.153586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.153606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.985 [2024-10-01 15:44:05.166831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.985 [2024-10-01 15:44:05.166851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.180409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.180439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 16968.00 IOPS, 132.56 MiB/s [2024-10-01 15:44:05.194148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.194167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.202880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.202900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.217483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.217503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.226437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.226457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 
[2024-10-01 15:44:05.235533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.235552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.244692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.244711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.253876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.253894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.268226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.268245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.281667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.281686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.295700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.295720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.244 [2024-10-01 15:44:05.304544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.244 [2024-10-01 15:44:05.304563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.313874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.313893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.328093] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.328114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.342363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.342383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.353770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.353790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.362518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.362537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.371109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.371128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.385792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.385812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.394773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.394793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.408957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.408977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.417787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.417807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.245 [2024-10-01 15:44:05.426860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.245 [2024-10-01 15:44:05.426890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.441108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.441127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.454547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.454567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.468113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.468131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.481714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.481734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.495485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.495503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.509572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.509592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.518482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 
[2024-10-01 15:44:05.518501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.527501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.527519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.503 [2024-10-01 15:44:05.536620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.503 [2024-10-01 15:44:05.536638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.545473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.545491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.559689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.559708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.569006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.569025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.578634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.578653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.588113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.588133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.597512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.597531] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.611921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.611940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.620756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.620775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.629734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.629753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.639018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.639036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.648031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.648050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.662475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.662494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.676035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.676054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.504 [2024-10-01 15:44:05.684819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.684839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:55.504 [2024-10-01 15:44:05.694131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.504 [2024-10-01 15:44:05.694149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.708556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.708575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.722170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.722189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.731146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.731164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.740664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.740683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.750269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.750287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.759285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.759304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.773178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.773198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.762 [2024-10-01 15:44:05.781989] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.762 [2024-10-01 15:44:05.782008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.796098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.796117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.805105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.805125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.814163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.814182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.828279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.828298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.841824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.841843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.855803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.855821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.864679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.864699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.873926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.873950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.888324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.888342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.897833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.897852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.906958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.906977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.916076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.916095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.924821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.924840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.939098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.939117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.763 [2024-10-01 15:44:05.951853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.763 [2024-10-01 15:44:05.951879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:05.965849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 
[2024-10-01 15:44:05.965874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:05.979195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:05.979214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:05.992849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:05.992876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.006765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.006784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.015500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.015518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.029797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.029817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.037212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.037230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.047450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.047469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.061451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.061470] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.070315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.070334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.079389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.079408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.088674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.088698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.097750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.097770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.112273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.112296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.125998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.126018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.139737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.139757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.147284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.147304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:56.022 [2024-10-01 15:44:06.157375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.157394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.171163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.171182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.179997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.180016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 16992.00 IOPS, 132.75 MiB/s [2024-10-01 15:44:06.189163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.189181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.197730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.197749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.022 [2024-10-01 15:44:06.207275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.022 [2024-10-01 15:44:06.207294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.280 [2024-10-01 15:44:06.221616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.280 [2024-10-01 15:44:06.221636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.280 [2024-10-01 15:44:06.234795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.280 [2024-10-01 15:44:06.234815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.280 
[2024-10-01 15:44:06.244051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:56.280 [2024-10-01 15:44:06.244069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:56.280 [the same two-line error pair recurs for every subsequent add attempt from 15:44:06.252553 through 15:44:08.157900; repeats omitted]
00:09:57.059 17002.00 IOPS, 132.83 MiB/s
00:09:58.098 [2024-10-01 15:44:08.157880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:58.098 [2024-10-01 15:44:08.157900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:58.098 [2024-10-01 15:44:08.167087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.167106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 [2024-10-01 15:44:08.181853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.181886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 [2024-10-01 15:44:08.190830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.190849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 17010.40 IOPS, 132.89 MiB/s 00:09:58.098 Latency(us) 00:09:58.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.098 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:58.098 Nvme1n1 : 5.01 17016.16 132.94 0.00 0.00 7516.03 3323.61 15042.07 00:09:58.098 =================================================================================================================== 00:09:58.098 Total : 17016.16 132.94 0.00 0.00 7516.03 3323.61 15042.07 00:09:58.098 [2024-10-01 15:44:08.201301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.201319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 [2024-10-01 15:44:08.213327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.213343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 [2024-10-01 15:44:08.225374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.225392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:58.098 [2024-10-01 15:44:08.237396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.098 [2024-10-01 15:44:08.237416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.098 [2024-10-01 15:44:08.249437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.099 [2024-10-01 15:44:08.249458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.099 [2024-10-01 15:44:08.261458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.099 [2024-10-01 15:44:08.261473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.099 [2024-10-01 15:44:08.273490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.099 [2024-10-01 15:44:08.273510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.099 [2024-10-01 15:44:08.285520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.099 [2024-10-01 15:44:08.285536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.297549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.297564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.309582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.309592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.321631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.321643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.333651] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.333667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.345681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.345691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.365741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.365759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 [2024-10-01 15:44:08.377767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.361 [2024-10-01 15:44:08.377777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2312986) - No such process 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2312986 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.361 delay0 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.361 15:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:58.361 [2024-10-01 15:44:08.516795] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:04.924 [2024-10-01 15:44:14.689100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2427bd0 is same with the state(6) to be set 00:10:04.924 Initializing NVMe Controllers 00:10:04.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:04.924 Initialization complete. Launching workers. 
00:10:04.924 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 59 00:10:04.924 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 346, failed to submit 33 00:10:04.924 success 147, unsuccessful 199, failed 0 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:04.924 rmmod nvme_tcp 00:10:04.924 rmmod nvme_fabrics 00:10:04.924 rmmod nvme_keyring 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 2310907 ']' 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 2310907 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2310907 ']' 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2310907 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2310907 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2310907' 00:10:04.924 killing process with pid 2310907 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2310907 00:10:04.924 15:44:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2310907 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.924 15:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.459 00:10:07.459 real 0m32.216s 00:10:07.459 user 0m43.161s 00:10:07.459 sys 0m11.055s 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.459 ************************************ 00:10:07.459 END TEST nvmf_zcopy 00:10:07.459 ************************************ 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.459 ************************************ 00:10:07.459 START TEST nvmf_nmic 00:10:07.459 ************************************ 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:07.459 * Looking for test storage... 
00:10:07.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.459 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.460 15:44:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.460 --rc genhtml_branch_coverage=1 00:10:07.460 --rc genhtml_function_coverage=1 00:10:07.460 --rc genhtml_legend=1 00:10:07.460 --rc geninfo_all_blocks=1 00:10:07.460 --rc geninfo_unexecuted_blocks=1 
00:10:07.460 00:10:07.460 ' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.460 --rc genhtml_branch_coverage=1 00:10:07.460 --rc genhtml_function_coverage=1 00:10:07.460 --rc genhtml_legend=1 00:10:07.460 --rc geninfo_all_blocks=1 00:10:07.460 --rc geninfo_unexecuted_blocks=1 00:10:07.460 00:10:07.460 ' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.460 --rc genhtml_branch_coverage=1 00:10:07.460 --rc genhtml_function_coverage=1 00:10:07.460 --rc genhtml_legend=1 00:10:07.460 --rc geninfo_all_blocks=1 00:10:07.460 --rc geninfo_unexecuted_blocks=1 00:10:07.460 00:10:07.460 ' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.460 --rc genhtml_branch_coverage=1 00:10:07.460 --rc genhtml_function_coverage=1 00:10:07.460 --rc genhtml_legend=1 00:10:07.460 --rc geninfo_all_blocks=1 00:10:07.460 --rc geninfo_unexecuted_blocks=1 00:10:07.460 00:10:07.460 ' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.460 15:44:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:07.460 
15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.460 15:44:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.118 15:44:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:14.118 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:10:14.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.119 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.119 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.119 15:44:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.119 
15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:10:14.119 00:10:14.119 --- 10.0.0.2 ping statistics --- 00:10:14.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.119 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:14.119 00:10:14.119 --- 10.0.0.1 ping statistics --- 00:10:14.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.119 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=2318469 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 2318469 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2318469 ']' 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.119 15:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.119 [2024-10-01 15:44:23.460810] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:14.119 [2024-10-01 15:44:23.460856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.119 [2024-10-01 15:44:23.532117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.119 [2024-10-01 15:44:23.611317] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.119 [2024-10-01 15:44:23.611355] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:14.120 [2024-10-01 15:44:23.611362] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.120 [2024-10-01 15:44:23.611367] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.120 [2024-10-01 15:44:23.611372] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.120 [2024-10-01 15:44:23.611475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.120 [2024-10-01 15:44:23.611602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.120 [2024-10-01 15:44:23.611713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.120 [2024-10-01 15:44:23.611714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.120 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.120 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:14.120 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:14.120 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.120 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 [2024-10-01 15:44:24.334244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.378 
15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 Malloc0 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 [2024-10-01 15:44:24.385914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:14.378 test case1: single bdev can't be used in multiple subsystems 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 [2024-10-01 15:44:24.413793] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:14.378 [2024-10-01 
15:44:24.413813] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:14.378 [2024-10-01 15:44:24.413820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.378 request: 00:10:14.378 { 00:10:14.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.378 "namespace": { 00:10:14.378 "bdev_name": "Malloc0", 00:10:14.378 "no_auto_visible": false 00:10:14.378 }, 00:10:14.378 "method": "nvmf_subsystem_add_ns", 00:10:14.378 "req_id": 1 00:10:14.378 } 00:10:14.378 Got JSON-RPC error response 00:10:14.378 response: 00:10:14.378 { 00:10:14.378 "code": -32602, 00:10:14.378 "message": "Invalid parameters" 00:10:14.378 } 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:14.378 Adding namespace failed - expected result. 
00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:14.378 test case2: host connect to nvmf target in multiple paths 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.378 [2024-10-01 15:44:24.425925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.378 15:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.751 15:44:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:16.682 15:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.682 15:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:16.682 15:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.682 15:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:16.682 15:44:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:18.578 15:44:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.578 [global] 00:10:18.578 thread=1 00:10:18.578 invalidate=1 00:10:18.578 rw=write 00:10:18.578 time_based=1 00:10:18.578 runtime=1 00:10:18.578 ioengine=libaio 00:10:18.578 direct=1 00:10:18.578 bs=4096 00:10:18.578 iodepth=1 00:10:18.578 norandommap=0 00:10:18.578 numjobs=1 00:10:18.578 00:10:18.578 verify_dump=1 00:10:18.578 verify_backlog=512 00:10:18.578 verify_state_save=0 00:10:18.578 do_verify=1 00:10:18.578 verify=crc32c-intel 00:10:18.578 [job0] 00:10:18.578 filename=/dev/nvme0n1 00:10:18.578 Could not set queue depth (nvme0n1) 00:10:18.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.835 fio-3.35 00:10:18.835 Starting 1 thread 00:10:20.206 00:10:20.206 job0: (groupid=0, jobs=1): err= 0: pid=2319528: Tue Oct 1 15:44:30 2024 00:10:20.206 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:20.206 slat (nsec): min=7084, max=26440, avg=8155.79, stdev=1167.95 00:10:20.206 clat (usec): min=189, max=794, avg=239.43, stdev=33.19 00:10:20.206 lat (usec): min=197, max=803, avg=247.58, 
stdev=33.35 00:10:20.206 clat percentiles (usec): 00:10:20.206 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:10:20.206 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 247], 00:10:20.206 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:10:20.206 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 478], 99.95th=[ 611], 00:10:20.206 | 99.99th=[ 799] 00:10:20.206 write: IOPS=2326, BW=9307KiB/s (9530kB/s)(9316KiB/1001msec); 0 zone resets 00:10:20.206 slat (usec): min=10, max=40651, avg=39.78, stdev=979.49 00:10:20.206 clat (usec): min=112, max=316, avg=165.99, stdev=34.91 00:10:20.206 lat (usec): min=124, max=40911, avg=205.76, stdev=983.40 00:10:20.206 clat percentiles (usec): 00:10:20.206 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 133], 00:10:20.206 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:10:20.206 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 237], 95.00th=[ 241], 00:10:20.206 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 314], 99.95th=[ 318], 00:10:20.206 | 99.99th=[ 318] 00:10:20.206 bw ( KiB/s): min= 8175, max= 8175, per=87.84%, avg=8175.00, stdev= 0.00, samples=1 00:10:20.206 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:20.206 lat (usec) : 250=83.25%, 500=16.70%, 750=0.02%, 1000=0.02% 00:10:20.206 cpu : usr=4.20%, sys=6.40%, ctx=4381, majf=0, minf=1 00:10:20.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.206 issued rwts: total=2048,2329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.206 00:10:20.206 Run status group 0 (all jobs): 00:10:20.206 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:20.206 WRITE: 
bw=9307KiB/s (9530kB/s), 9307KiB/s-9307KiB/s (9530kB/s-9530kB/s), io=9316KiB (9540kB), run=1001-1001msec 00:10:20.206 00:10:20.206 Disk stats (read/write): 00:10:20.206 nvme0n1: ios=1847/2048, merge=0/0, ticks=1408/324, in_queue=1732, util=100.00% 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.206 
15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.206 rmmod nvme_tcp 00:10:20.206 rmmod nvme_fabrics 00:10:20.206 rmmod nvme_keyring 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 2318469 ']' 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 2318469 00:10:20.206 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2318469 ']' 00:10:20.207 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2318469 00:10:20.207 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:20.207 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.207 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2318469 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2318469' 00:10:20.465 killing process with pid 2318469 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2318469 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2318469 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # 
'[' '' == iso ']' 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.465 15:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.999 00:10:22.999 real 0m15.557s 00:10:22.999 user 0m35.241s 00:10:22.999 sys 0m5.384s 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.999 ************************************ 00:10:22.999 END TEST nvmf_nmic 00:10:22.999 ************************************ 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.999 15:44:32 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.999 ************************************ 00:10:22.999 START TEST nvmf_fio_target 00:10:22.999 ************************************ 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.999 * Looking for test storage... 00:10:22.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.999 
15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.999 --rc genhtml_branch_coverage=1 00:10:22.999 --rc genhtml_function_coverage=1 00:10:22.999 --rc genhtml_legend=1 00:10:22.999 --rc geninfo_all_blocks=1 00:10:22.999 --rc geninfo_unexecuted_blocks=1 00:10:22.999 00:10:22.999 ' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.999 --rc genhtml_branch_coverage=1 00:10:22.999 --rc genhtml_function_coverage=1 00:10:22.999 --rc genhtml_legend=1 00:10:22.999 --rc geninfo_all_blocks=1 00:10:22.999 --rc geninfo_unexecuted_blocks=1 00:10:22.999 00:10:22.999 ' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.999 --rc genhtml_branch_coverage=1 00:10:22.999 --rc genhtml_function_coverage=1 00:10:22.999 --rc genhtml_legend=1 00:10:22.999 --rc geninfo_all_blocks=1 00:10:22.999 --rc geninfo_unexecuted_blocks=1 00:10:22.999 00:10:22.999 ' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.999 --rc genhtml_branch_coverage=1 00:10:22.999 --rc 
genhtml_function_coverage=1 00:10:22.999 --rc genhtml_legend=1 00:10:22.999 --rc geninfo_all_blocks=1 00:10:22.999 --rc geninfo_unexecuted_blocks=1 00:10:22.999 00:10:22.999 ' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.999 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.000 15:44:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.000 15:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.565 15:44:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.565 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:29.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:29.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:29.566 15:44:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:29.566 Found net devices under 0000:86:00.0: cvl_0_0 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:29.566 Found net devices under 0000:86:00.1: cvl_0_1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.566 15:44:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.566 15:44:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:10:29.566 00:10:29.566 --- 10.0.0.2 ping statistics --- 00:10:29.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.566 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:29.566 00:10:29.566 --- 10.0.0.1 ping statistics --- 00:10:29.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.566 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:29.566 15:44:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=2323364 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 2323364 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2323364 ']' 00:10:29.566 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.567 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.567 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.567 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.567 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.567 [2024-10-01 15:44:39.068660] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:10:29.567 [2024-10-01 15:44:39.068707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.567 [2024-10-01 15:44:39.136985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.567 [2024-10-01 15:44:39.209463] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.567 [2024-10-01 15:44:39.209505] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.567 [2024-10-01 15:44:39.209512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.567 [2024-10-01 15:44:39.209518] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.567 [2024-10-01 15:44:39.209526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:29.567 [2024-10-01 15:44:39.209583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.567 [2024-10-01 15:44:39.209624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.567 [2024-10-01 15:44:39.209709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.567 [2024-10-01 15:44:39.209710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.824 15:44:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:30.083 [2024-10-01 15:44:40.110956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.083 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.341 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:30.341 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.599 15:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:30.599 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.857 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:30.857 15:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.857 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:30.857 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:31.115 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.373 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:31.373 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.631 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:31.631 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.890 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:31.890 15:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:31.890 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.147 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.147 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.405 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:32.405 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.663 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.663 [2024-10-01 15:44:42.785906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.663 15:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:32.920 15:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:33.178 15:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:34.551 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:34.552 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:34.552 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.552 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:34.552 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:34.552 15:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:36.448 15:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:36.448 [global] 00:10:36.448 thread=1 00:10:36.448 invalidate=1 00:10:36.448 rw=write 00:10:36.448 time_based=1 00:10:36.448 runtime=1 00:10:36.448 ioengine=libaio 00:10:36.448 direct=1 00:10:36.448 bs=4096 00:10:36.448 iodepth=1 00:10:36.448 norandommap=0 00:10:36.448 numjobs=1 00:10:36.448 00:10:36.448 
verify_dump=1 00:10:36.448 verify_backlog=512 00:10:36.448 verify_state_save=0 00:10:36.448 do_verify=1 00:10:36.448 verify=crc32c-intel 00:10:36.448 [job0] 00:10:36.448 filename=/dev/nvme0n1 00:10:36.448 [job1] 00:10:36.448 filename=/dev/nvme0n2 00:10:36.448 [job2] 00:10:36.448 filename=/dev/nvme0n3 00:10:36.448 [job3] 00:10:36.448 filename=/dev/nvme0n4 00:10:36.448 Could not set queue depth (nvme0n1) 00:10:36.448 Could not set queue depth (nvme0n2) 00:10:36.448 Could not set queue depth (nvme0n3) 00:10:36.448 Could not set queue depth (nvme0n4) 00:10:36.704 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.704 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.704 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.704 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.704 fio-3.35 00:10:36.704 Starting 4 threads 00:10:38.074 00:10:38.074 job0: (groupid=0, jobs=1): err= 0: pid=2324798: Tue Oct 1 15:44:48 2024 00:10:38.074 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:38.074 slat (nsec): min=7139, max=46534, avg=8295.55, stdev=1626.62 00:10:38.074 clat (usec): min=167, max=533, avg=239.09, stdev=40.43 00:10:38.074 lat (usec): min=175, max=541, avg=247.38, stdev=40.46 00:10:38.074 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 208], 00:10:38.075 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:10:38.075 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:38.075 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 519], 00:10:38.075 | 99.99th=[ 537] 00:10:38.075 write: IOPS=2372, BW=9491KiB/s (9718kB/s)(9500KiB/1001msec); 0 zone resets 00:10:38.075 slat (usec): min=10, max=40633, avg=37.62, stdev=932.99 
00:10:38.075 clat (usec): min=108, max=312, avg=164.65, stdev=42.27 00:10:38.075 lat (usec): min=120, max=40833, avg=202.28, stdev=935.36 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 128], 00:10:38.075 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 151], 60.00th=[ 163], 00:10:38.075 | 70.00th=[ 180], 80.00th=[ 196], 90.00th=[ 239], 95.00th=[ 249], 00:10:38.075 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 310], 00:10:38.075 | 99.99th=[ 314] 00:10:38.075 bw ( KiB/s): min= 8192, max= 8192, per=38.76%, avg=8192.00, stdev= 0.00, samples=1 00:10:38.075 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:38.075 lat (usec) : 250=83.65%, 500=16.23%, 750=0.11% 00:10:38.075 cpu : usr=3.90%, sys=6.90%, ctx=4426, majf=0, minf=1 00:10:38.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 issued rwts: total=2048,2375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.075 job1: (groupid=0, jobs=1): err= 0: pid=2324799: Tue Oct 1 15:44:48 2024 00:10:38.075 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4156KiB/1031msec) 00:10:38.075 slat (nsec): min=6567, max=27136, avg=8032.71, stdev=2099.92 00:10:38.075 clat (usec): min=183, max=41971, avg=625.34, stdev=3992.79 00:10:38.075 lat (usec): min=191, max=41994, avg=633.37, stdev=3993.47 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:38.075 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:10:38.075 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 326], 00:10:38.075 | 99.00th=[ 494], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:10:38.075 | 99.99th=[42206] 00:10:38.075 
write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:10:38.075 slat (usec): min=9, max=40706, avg=50.96, stdev=1160.34 00:10:38.075 clat (usec): min=112, max=330, avg=187.75, stdev=43.61 00:10:38.075 lat (usec): min=122, max=41023, avg=238.71, stdev=1165.41 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 123], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 147], 00:10:38.075 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 192], 00:10:38.075 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 255], 00:10:38.075 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 330], 00:10:38.075 | 99.99th=[ 330] 00:10:38.075 bw ( KiB/s): min= 4096, max= 8192, per=29.07%, avg=6144.00, stdev=2896.31, samples=2 00:10:38.075 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:38.075 lat (usec) : 250=88.08%, 500=11.53% 00:10:38.075 lat (msec) : 50=0.39% 00:10:38.075 cpu : usr=1.65%, sys=2.23%, ctx=2578, majf=0, minf=1 00:10:38.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 issued rwts: total=1039,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.075 job2: (groupid=0, jobs=1): err= 0: pid=2324800: Tue Oct 1 15:44:48 2024 00:10:38.075 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:10:38.075 slat (nsec): min=10157, max=25010, avg=20927.76, stdev=5166.43 00:10:38.075 clat (usec): min=40765, max=41952, avg=41005.84, stdev=232.09 00:10:38.075 lat (usec): min=40775, max=41976, avg=41026.76, stdev=232.97 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:38.075 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.075 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:38.075 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.075 | 99.99th=[42206] 00:10:38.075 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:38.075 slat (usec): min=11, max=41636, avg=139.53, stdev=2063.89 00:10:38.075 clat (usec): min=146, max=365, avg=187.23, stdev=21.20 00:10:38.075 lat (usec): min=160, max=41943, avg=326.76, stdev=2072.35 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:10:38.075 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:38.075 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 225], 00:10:38.075 | 99.00th=[ 255], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 367], 00:10:38.075 | 99.99th=[ 367] 00:10:38.075 bw ( KiB/s): min= 4096, max= 4096, per=19.38%, avg=4096.00, stdev= 0.00, samples=1 00:10:38.075 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:38.075 lat (usec) : 250=94.75%, 500=1.31% 00:10:38.075 lat (msec) : 50=3.94% 00:10:38.075 cpu : usr=0.58%, sys=0.97%, ctx=536, majf=0, minf=1 00:10:38.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.075 job3: (groupid=0, jobs=1): err= 0: pid=2324801: Tue Oct 1 15:44:48 2024 00:10:38.075 read: IOPS=522, BW=2092KiB/s (2142kB/s)(2096KiB/1002msec) 00:10:38.075 slat (nsec): min=3103, max=25599, avg=4355.09, stdev=3400.08 00:10:38.075 clat (usec): min=195, max=42034, avg=1432.75, stdev=6878.90 00:10:38.075 lat (usec): min=199, max=42053, avg=1437.11, stdev=6881.81 00:10:38.075 clat percentiles (usec): 
00:10:38.075 | 1.00th=[ 215], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:10:38.075 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:10:38.075 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:38.075 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.075 | 99.99th=[42206] 00:10:38.075 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:10:38.075 slat (usec): min=4, max=41565, avg=69.98, stdev=1461.23 00:10:38.075 clat (usec): min=116, max=1240, avg=170.42, stdev=47.21 00:10:38.075 lat (usec): min=121, max=41846, avg=240.40, stdev=1467.55 00:10:38.075 clat percentiles (usec): 00:10:38.075 | 1.00th=[ 124], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 147], 00:10:38.075 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:10:38.075 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:38.075 | 99.00th=[ 243], 99.50th=[ 367], 99.90th=[ 627], 99.95th=[ 1237], 00:10:38.075 | 99.99th=[ 1237] 00:10:38.075 bw ( KiB/s): min= 72, max= 8120, per=19.38%, avg=4096.00, stdev=5690.80, samples=2 00:10:38.075 iops : min= 18, max= 2030, avg=1024.00, stdev=1422.70, samples=2 00:10:38.075 lat (usec) : 250=80.17%, 500=18.67%, 750=0.13% 00:10:38.075 lat (msec) : 2=0.06%, 50=0.97% 00:10:38.075 cpu : usr=0.60%, sys=0.90%, ctx=1551, majf=0, minf=1 00:10:38.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.075 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.075 00:10:38.075 Run status group 0 (all jobs): 00:10:38.075 READ: bw=13.8MiB/s (14.4MB/s), 81.5KiB/s-8184KiB/s (83.4kB/s-8380kB/s), io=14.2MiB (14.9MB), run=1001-1031msec 00:10:38.075 WRITE: bw=20.6MiB/s 
(21.6MB/s), 1986KiB/s-9491KiB/s (2034kB/s-9718kB/s), io=21.3MiB (22.3MB), run=1001-1031msec 00:10:38.075 00:10:38.075 Disk stats (read/write): 00:10:38.075 nvme0n1: ios=1588/1934, merge=0/0, ticks=907/314, in_queue=1221, util=87.07% 00:10:38.075 nvme0n2: ios=1052/1536, merge=0/0, ticks=1263/281, in_queue=1544, util=90.94% 00:10:38.075 nvme0n3: ios=40/512, merge=0/0, ticks=1483/92, in_queue=1575, util=95.11% 00:10:38.075 nvme0n4: ios=579/1024, merge=0/0, ticks=1183/168, in_queue=1351, util=99.56% 00:10:38.075 15:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:38.075 [global] 00:10:38.075 thread=1 00:10:38.075 invalidate=1 00:10:38.075 rw=randwrite 00:10:38.075 time_based=1 00:10:38.075 runtime=1 00:10:38.075 ioengine=libaio 00:10:38.075 direct=1 00:10:38.075 bs=4096 00:10:38.075 iodepth=1 00:10:38.075 norandommap=0 00:10:38.075 numjobs=1 00:10:38.075 00:10:38.075 verify_dump=1 00:10:38.075 verify_backlog=512 00:10:38.075 verify_state_save=0 00:10:38.075 do_verify=1 00:10:38.075 verify=crc32c-intel 00:10:38.075 [job0] 00:10:38.075 filename=/dev/nvme0n1 00:10:38.075 [job1] 00:10:38.075 filename=/dev/nvme0n2 00:10:38.075 [job2] 00:10:38.075 filename=/dev/nvme0n3 00:10:38.075 [job3] 00:10:38.075 filename=/dev/nvme0n4 00:10:38.075 Could not set queue depth (nvme0n1) 00:10:38.075 Could not set queue depth (nvme0n2) 00:10:38.075 Could not set queue depth (nvme0n3) 00:10:38.075 Could not set queue depth (nvme0n4) 00:10:38.332 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.332 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.332 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.332 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.332 fio-3.35 00:10:38.332 Starting 4 threads 00:10:39.705 00:10:39.705 job0: (groupid=0, jobs=1): err= 0: pid=2325173: Tue Oct 1 15:44:49 2024 00:10:39.705 read: IOPS=2119, BW=8480KiB/s (8683kB/s)(8488KiB/1001msec) 00:10:39.705 slat (nsec): min=7027, max=26506, avg=9183.58, stdev=1565.85 00:10:39.705 clat (usec): min=182, max=457, avg=249.49, stdev=35.35 00:10:39.705 lat (usec): min=191, max=464, avg=258.67, stdev=35.42 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:10:39.705 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 260], 00:10:39.705 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:10:39.705 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 404], 00:10:39.705 | 99.99th=[ 457] 00:10:39.705 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:39.705 slat (nsec): min=5030, max=45644, avg=11747.32, stdev=2915.45 00:10:39.705 clat (usec): min=121, max=278, avg=157.22, stdev=16.78 00:10:39.705 lat (usec): min=127, max=284, avg=168.97, stdev=17.21 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:39.705 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:10:39.705 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 192], 00:10:39.705 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 237], 00:10:39.705 | 99.99th=[ 277] 00:10:39.705 bw ( KiB/s): min= 9848, max= 9848, per=41.55%, avg=9848.00, stdev= 0.00, samples=1 00:10:39.705 iops : min= 2462, max= 2462, avg=2462.00, stdev= 0.00, samples=1 00:10:39.705 lat (usec) : 250=79.99%, 500=20.01% 00:10:39.705 cpu : usr=4.10%, sys=7.10%, ctx=4684, majf=0, minf=1 00:10:39.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:39.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 issued rwts: total=2122,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.705 job1: (groupid=0, jobs=1): err= 0: pid=2325174: Tue Oct 1 15:44:49 2024 00:10:39.705 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:10:39.705 slat (nsec): min=9874, max=27010, avg=23896.41, stdev=3600.05 00:10:39.705 clat (usec): min=40867, max=41977, avg=41217.19, stdev=422.42 00:10:39.705 lat (usec): min=40894, max=42004, avg=41241.09, stdev=422.76 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:39.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.705 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:39.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.705 | 99.99th=[42206] 00:10:39.705 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:39.705 slat (nsec): min=10550, max=36402, avg=12218.12, stdev=1816.44 00:10:39.705 clat (usec): min=140, max=961, avg=192.13, stdev=55.67 00:10:39.705 lat (usec): min=151, max=972, avg=204.35, stdev=55.78 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:10:39.705 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:10:39.705 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 233], 00:10:39.705 | 99.00th=[ 330], 99.50th=[ 529], 99.90th=[ 963], 99.95th=[ 963], 00:10:39.705 | 99.99th=[ 963] 00:10:39.705 bw ( KiB/s): min= 4096, max= 4096, per=17.28%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.705 lat (usec) : 250=92.32%, 500=3.00%, 750=0.19%, 1000=0.37% 00:10:39.705 lat (msec) : 50=4.12% 
00:10:39.705 cpu : usr=0.59%, sys=0.79%, ctx=536, majf=0, minf=1 00:10:39.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.705 job2: (groupid=0, jobs=1): err= 0: pid=2325175: Tue Oct 1 15:44:49 2024 00:10:39.705 read: IOPS=2439, BW=9758KiB/s (9992kB/s)(9768KiB/1001msec) 00:10:39.705 slat (nsec): min=6404, max=29047, avg=7360.06, stdev=885.19 00:10:39.705 clat (usec): min=171, max=403, avg=219.97, stdev=21.90 00:10:39.705 lat (usec): min=178, max=410, avg=227.33, stdev=21.92 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:10:39.705 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:10:39.705 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:10:39.705 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 367], 99.95th=[ 371], 00:10:39.705 | 99.99th=[ 404] 00:10:39.705 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:39.705 slat (nsec): min=4592, max=39058, avg=10107.36, stdev=1343.68 00:10:39.705 clat (usec): min=109, max=332, avg=159.32, stdev=33.17 00:10:39.705 lat (usec): min=119, max=343, avg=169.43, stdev=33.28 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:10:39.705 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:10:39.705 | 70.00th=[ 159], 80.00th=[ 172], 90.00th=[ 221], 95.00th=[ 241], 00:10:39.705 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 318], 00:10:39.705 | 99.99th=[ 334] 00:10:39.705 bw ( KiB/s): min=10928, max=10928, per=46.11%, avg=10928.00, stdev= 0.00, samples=1 
00:10:39.705 iops : min= 2732, max= 2732, avg=2732.00, stdev= 0.00, samples=1 00:10:39.705 lat (usec) : 250=93.88%, 500=6.12% 00:10:39.705 cpu : usr=2.30%, sys=4.70%, ctx=5002, majf=0, minf=2 00:10:39.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 issued rwts: total=2442,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.705 job3: (groupid=0, jobs=1): err= 0: pid=2325176: Tue Oct 1 15:44:49 2024 00:10:39.705 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:10:39.705 slat (nsec): min=10300, max=24205, avg=22920.27, stdev=2851.82 00:10:39.705 clat (usec): min=40790, max=42128, avg=41467.55, stdev=538.03 00:10:39.705 lat (usec): min=40800, max=42152, avg=41490.47, stdev=538.95 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:39.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:10:39.705 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:39.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.705 | 99.99th=[42206] 00:10:39.705 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:39.705 slat (nsec): min=9589, max=37191, avg=10904.10, stdev=1731.32 00:10:39.705 clat (usec): min=137, max=874, avg=226.18, stdev=56.02 00:10:39.705 lat (usec): min=148, max=884, avg=237.09, stdev=56.03 00:10:39.705 clat percentiles (usec): 00:10:39.705 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 180], 20.00th=[ 200], 00:10:39.705 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:10:39.705 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:10:39.705 | 99.00th=[ 359], 99.50th=[ 717], 
99.90th=[ 873], 99.95th=[ 873], 00:10:39.705 | 99.99th=[ 873] 00:10:39.705 bw ( KiB/s): min= 4096, max= 4096, per=17.28%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.705 lat (usec) : 250=85.39%, 500=9.74%, 750=0.56%, 1000=0.19% 00:10:39.705 lat (msec) : 50=4.12% 00:10:39.705 cpu : usr=0.10%, sys=0.68%, ctx=537, majf=0, minf=1 00:10:39.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.705 00:10:39.705 Run status group 0 (all jobs): 00:10:39.705 READ: bw=17.4MiB/s (18.2MB/s), 84.9KiB/s-9758KiB/s (86.9kB/s-9992kB/s), io=18.0MiB (18.9MB), run=1001-1037msec 00:10:39.705 WRITE: bw=23.1MiB/s (24.3MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1037msec 00:10:39.705 00:10:39.705 Disk stats (read/write): 00:10:39.705 nvme0n1: ios=1916/2048, merge=0/0, ticks=710/315, in_queue=1025, util=96.99% 00:10:39.705 nvme0n2: ios=68/512, merge=0/0, ticks=887/91, in_queue=978, util=94.92% 00:10:39.705 nvme0n3: ios=2105/2192, merge=0/0, ticks=516/342, in_queue=858, util=91.06% 00:10:39.705 nvme0n4: ios=59/512, merge=0/0, ticks=855/114, in_queue=969, util=100.00% 00:10:39.705 15:44:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:39.705 [global] 00:10:39.705 thread=1 00:10:39.705 invalidate=1 00:10:39.705 rw=write 00:10:39.705 time_based=1 00:10:39.705 runtime=1 00:10:39.705 ioengine=libaio 00:10:39.705 direct=1 00:10:39.705 bs=4096 00:10:39.705 iodepth=128 00:10:39.705 norandommap=0 00:10:39.705 
numjobs=1 00:10:39.705 00:10:39.705 verify_dump=1 00:10:39.705 verify_backlog=512 00:10:39.705 verify_state_save=0 00:10:39.705 do_verify=1 00:10:39.705 verify=crc32c-intel 00:10:39.706 [job0] 00:10:39.706 filename=/dev/nvme0n1 00:10:39.706 [job1] 00:10:39.706 filename=/dev/nvme0n2 00:10:39.706 [job2] 00:10:39.706 filename=/dev/nvme0n3 00:10:39.706 [job3] 00:10:39.706 filename=/dev/nvme0n4 00:10:39.706 Could not set queue depth (nvme0n1) 00:10:39.706 Could not set queue depth (nvme0n2) 00:10:39.706 Could not set queue depth (nvme0n3) 00:10:39.706 Could not set queue depth (nvme0n4) 00:10:39.963 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.963 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.963 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.963 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.963 fio-3.35 00:10:39.963 Starting 4 threads 00:10:41.354 00:10:41.354 job0: (groupid=0, jobs=1): err= 0: pid=2325565: Tue Oct 1 15:44:51 2024 00:10:41.354 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:41.354 slat (nsec): min=1146, max=21169k, avg=79489.52, stdev=523673.96 00:10:41.354 clat (usec): min=4837, max=35479, avg=10163.05, stdev=3557.47 00:10:41.354 lat (usec): min=4844, max=35497, avg=10242.54, stdev=3595.23 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 4883], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8356], 00:10:41.354 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:10:41.354 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11338], 95.00th=[13042], 00:10:41.354 | 99.00th=[30540], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:10:41.354 | 99.99th=[35390] 00:10:41.354 write: IOPS=5820, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1003msec); 0 zone 
resets 00:10:41.354 slat (nsec): min=1920, max=9112.7k, avg=87321.43, stdev=502076.58 00:10:41.354 clat (usec): min=306, max=51415, avg=11952.97, stdev=8498.42 00:10:41.354 lat (usec): min=3358, max=51419, avg=12040.29, stdev=8541.61 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 4883], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 8160], 00:10:41.354 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:10:41.354 | 70.00th=[10290], 80.00th=[10683], 90.00th=[15926], 95.00th=[37487], 00:10:41.354 | 99.00th=[47973], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:10:41.354 | 99.99th=[51643] 00:10:41.354 bw ( KiB/s): min=21104, max=24576, per=32.89%, avg=22840.00, stdev=2455.07, samples=2 00:10:41.354 iops : min= 5276, max= 6144, avg=5710.00, stdev=613.77, samples=2 00:10:41.354 lat (usec) : 500=0.01% 00:10:41.354 lat (msec) : 4=0.40%, 10=49.76%, 20=44.58%, 50=4.84%, 100=0.41% 00:10:41.354 cpu : usr=3.29%, sys=6.29%, ctx=637, majf=0, minf=1 00:10:41.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:41.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.354 issued rwts: total=5632,5838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.354 job1: (groupid=0, jobs=1): err= 0: pid=2325574: Tue Oct 1 15:44:51 2024 00:10:41.354 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:41.354 slat (nsec): min=1507, max=15861k, avg=136355.76, stdev=929930.62 00:10:41.354 clat (usec): min=4759, max=48429, avg=15753.28, stdev=7553.90 00:10:41.354 lat (usec): min=4767, max=48435, avg=15889.64, stdev=7615.06 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10552], 00:10:41.354 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[15008], 00:10:41.354 | 
70.00th=[17171], 80.00th=[19792], 90.00th=[28181], 95.00th=[33817], 00:10:41.354 | 99.00th=[40633], 99.50th=[44827], 99.90th=[48497], 99.95th=[48497], 00:10:41.354 | 99.99th=[48497] 00:10:41.354 write: IOPS=3442, BW=13.4MiB/s (14.1MB/s)(13.6MiB/1008msec); 0 zone resets 00:10:41.354 slat (usec): min=2, max=18395, avg=161.11, stdev=839.42 00:10:41.354 clat (usec): min=2953, max=62495, avg=22902.56, stdev=10761.93 00:10:41.354 lat (usec): min=2962, max=62506, avg=23063.67, stdev=10840.91 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 4113], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[12387], 00:10:41.354 | 30.00th=[17433], 40.00th=[21365], 50.00th=[21890], 60.00th=[24249], 00:10:41.354 | 70.00th=[26870], 80.00th=[29754], 90.00th=[38536], 95.00th=[43779], 00:10:41.354 | 99.00th=[54789], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:10:41.354 | 99.99th=[62653] 00:10:41.354 bw ( KiB/s): min=12984, max=13752, per=19.25%, avg=13368.00, stdev=543.06, samples=2 00:10:41.354 iops : min= 3246, max= 3438, avg=3342.00, stdev=135.76, samples=2 00:10:41.354 lat (msec) : 4=0.37%, 10=14.28%, 20=42.77%, 50=41.75%, 100=0.84% 00:10:41.354 cpu : usr=2.88%, sys=4.67%, ctx=353, majf=0, minf=1 00:10:41.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:41.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.354 issued rwts: total=3072,3470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.354 job2: (groupid=0, jobs=1): err= 0: pid=2325593: Tue Oct 1 15:44:51 2024 00:10:41.354 read: IOPS=5350, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec) 00:10:41.354 slat (nsec): min=1164, max=9700.1k, avg=75250.38, stdev=549902.55 00:10:41.354 clat (usec): min=2287, max=54850, avg=11216.58, stdev=3123.93 00:10:41.354 lat (usec): min=3047, max=61448, avg=11291.84, 
stdev=3168.72 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 5604], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 9634], 00:10:41.354 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:10:41.354 | 70.00th=[11469], 80.00th=[11863], 90.00th=[13566], 95.00th=[15926], 00:10:41.354 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26870], 99.95th=[53740], 00:10:41.354 | 99.99th=[54789] 00:10:41.354 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:41.354 slat (nsec): min=1926, max=35988k, avg=78773.18, stdev=780105.08 00:10:41.354 clat (usec): min=317, max=60537, avg=11783.87, stdev=6525.54 00:10:41.354 lat (usec): min=598, max=60567, avg=11862.65, stdev=6575.98 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 2278], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 8455], 00:10:41.354 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:10:41.354 | 70.00th=[11469], 80.00th=[11600], 90.00th=[20055], 95.00th=[26608], 00:10:41.354 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:41.354 | 99.99th=[60556] 00:10:41.354 bw ( KiB/s): min=20480, max=24576, per=32.44%, avg=22528.00, stdev=2896.31, samples=2 00:10:41.354 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:41.354 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.10% 00:10:41.354 lat (msec) : 2=0.18%, 4=1.05%, 10=31.72%, 20=60.70%, 50=6.09% 00:10:41.354 lat (msec) : 100=0.05% 00:10:41.354 cpu : usr=3.79%, sys=5.68%, ctx=419, majf=0, minf=1 00:10:41.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:41.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.354 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.354 job3: (groupid=0, jobs=1): err= 0: 
pid=2325599: Tue Oct 1 15:44:51 2024 00:10:41.354 read: IOPS=2463, BW=9855KiB/s (10.1MB/s)(9904KiB/1005msec) 00:10:41.354 slat (nsec): min=1328, max=33878k, avg=198571.11, stdev=1524832.09 00:10:41.354 clat (usec): min=2633, max=80314, avg=23435.41, stdev=12036.20 00:10:41.354 lat (usec): min=5010, max=80333, avg=23633.99, stdev=12164.74 00:10:41.354 clat percentiles (usec): 00:10:41.354 | 1.00th=[ 9896], 5.00th=[12518], 10.00th=[13304], 20.00th=[14746], 00:10:41.354 | 30.00th=[15139], 40.00th=[16909], 50.00th=[17957], 60.00th=[20055], 00:10:41.355 | 70.00th=[27395], 80.00th=[31589], 90.00th=[41157], 95.00th=[52167], 00:10:41.355 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[77071], 00:10:41.355 | 99.99th=[80217] 00:10:41.355 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:41.355 slat (usec): min=2, max=21039, avg=187.48, stdev=859.57 00:10:41.355 clat (usec): min=1166, max=62559, avg=26989.65, stdev=12710.92 00:10:41.355 lat (usec): min=1179, max=62570, avg=27177.14, stdev=12784.50 00:10:41.355 clat percentiles (usec): 00:10:41.355 | 1.00th=[ 3294], 5.00th=[ 9765], 10.00th=[12911], 20.00th=[20055], 00:10:41.355 | 30.00th=[21627], 40.00th=[22152], 50.00th=[23200], 60.00th=[26608], 00:10:41.355 | 70.00th=[27919], 80.00th=[32900], 90.00th=[47973], 95.00th=[56886], 00:10:41.355 | 99.00th=[61080], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:10:41.355 | 99.99th=[62653] 00:10:41.355 bw ( KiB/s): min= 9400, max=11080, per=14.75%, avg=10240.00, stdev=1187.94, samples=2 00:10:41.355 iops : min= 2350, max= 2770, avg=2560.00, stdev=296.98, samples=2 00:10:41.355 lat (msec) : 2=0.26%, 4=0.50%, 10=2.54%, 20=36.38%, 50=53.12% 00:10:41.355 lat (msec) : 100=7.21% 00:10:41.355 cpu : usr=2.79%, sys=2.89%, ctx=337, majf=0, minf=2 00:10:41.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:41.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.355 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.355 issued rwts: total=2476,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.355 00:10:41.355 Run status group 0 (all jobs): 00:10:41.355 READ: bw=64.1MiB/s (67.3MB/s), 9855KiB/s-21.9MiB/s (10.1MB/s-23.0MB/s), io=64.7MiB (67.8MB), run=1003-1008msec 00:10:41.355 WRITE: bw=67.8MiB/s (71.1MB/s), 9.95MiB/s-22.7MiB/s (10.4MB/s-23.8MB/s), io=68.4MiB (71.7MB), run=1003-1008msec 00:10:41.355 00:10:41.355 Disk stats (read/write): 00:10:41.355 nvme0n1: ios=4627/4944, merge=0/0, ticks=22947/29705, in_queue=52652, util=99.70% 00:10:41.355 nvme0n2: ios=2560/2879, merge=0/0, ticks=40011/64005, in_queue=104016, util=86.56% 00:10:41.355 nvme0n3: ios=4647/4615, merge=0/0, ticks=35741/41392, in_queue=77133, util=97.59% 00:10:41.355 nvme0n4: ios=2091/2159, merge=0/0, ticks=32867/39907, in_queue=72774, util=97.47% 00:10:41.355 15:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:41.355 [global] 00:10:41.355 thread=1 00:10:41.355 invalidate=1 00:10:41.355 rw=randwrite 00:10:41.355 time_based=1 00:10:41.355 runtime=1 00:10:41.355 ioengine=libaio 00:10:41.355 direct=1 00:10:41.355 bs=4096 00:10:41.355 iodepth=128 00:10:41.355 norandommap=0 00:10:41.355 numjobs=1 00:10:41.355 00:10:41.355 verify_dump=1 00:10:41.355 verify_backlog=512 00:10:41.355 verify_state_save=0 00:10:41.355 do_verify=1 00:10:41.355 verify=crc32c-intel 00:10:41.355 [job0] 00:10:41.355 filename=/dev/nvme0n1 00:10:41.355 [job1] 00:10:41.355 filename=/dev/nvme0n2 00:10:41.355 [job2] 00:10:41.355 filename=/dev/nvme0n3 00:10:41.355 [job3] 00:10:41.355 filename=/dev/nvme0n4 00:10:41.355 Could not set queue depth (nvme0n1) 00:10:41.355 Could not set queue depth (nvme0n2) 00:10:41.355 Could not set queue depth (nvme0n3) 00:10:41.355 Could not 
set queue depth (nvme0n4) 00:10:41.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.612 fio-3.35 00:10:41.612 Starting 4 threads 00:10:42.979 00:10:42.979 job0: (groupid=0, jobs=1): err= 0: pid=2326013: Tue Oct 1 15:44:52 2024 00:10:42.979 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:10:42.979 slat (nsec): min=1053, max=15076k, avg=106669.85, stdev=815922.13 00:10:42.979 clat (usec): min=513, max=50595, avg=12760.69, stdev=6210.94 00:10:42.979 lat (usec): min=2902, max=50601, avg=12867.36, stdev=6265.53 00:10:42.979 clat percentiles (usec): 00:10:42.979 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 8979], 00:10:42.980 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[11863], 00:10:42.980 | 70.00th=[14615], 80.00th=[17695], 90.00th=[20841], 95.00th=[23462], 00:10:42.980 | 99.00th=[32900], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070], 00:10:42.980 | 99.99th=[50594] 00:10:42.980 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:42.980 slat (nsec): min=1689, max=20122k, avg=90978.40, stdev=551428.06 00:10:42.980 clat (usec): min=164, max=43395, avg=13083.60, stdev=6360.19 00:10:42.980 lat (usec): min=416, max=43399, avg=13174.57, stdev=6407.31 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 3064], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[ 9503], 00:10:42.980 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[11600], 00:10:42.980 | 70.00th=[15533], 80.00th=[16450], 90.00th=[22676], 95.00th=[25560], 00:10:42.980 | 99.00th=[37487], 
99.50th=[37487], 99.90th=[40633], 99.95th=[43254], 00:10:42.980 | 99.99th=[43254] 00:10:42.980 bw ( KiB/s): min=14672, max=26280, per=30.83%, avg=20476.00, stdev=8208.10, samples=2 00:10:42.980 iops : min= 3668, max= 6570, avg=5119.00, stdev=2052.02, samples=2 00:10:42.980 lat (usec) : 250=0.01%, 500=0.04%, 750=0.04%, 1000=0.07% 00:10:42.980 lat (msec) : 2=0.02%, 4=1.13%, 10=36.12%, 20=48.75%, 50=13.81% 00:10:42.980 lat (msec) : 100=0.01% 00:10:42.980 cpu : usr=2.89%, sys=5.28%, ctx=594, majf=0, minf=1 00:10:42.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:42.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.980 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.980 job1: (groupid=0, jobs=1): err= 0: pid=2326028: Tue Oct 1 15:44:52 2024 00:10:42.980 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:42.980 slat (nsec): min=1395, max=12187k, avg=132117.30, stdev=911753.72 00:10:42.980 clat (usec): min=5966, max=40088, avg=15603.58, stdev=5061.08 00:10:42.980 lat (usec): min=5978, max=40098, avg=15735.69, stdev=5129.53 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:10:42.980 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14484], 60.00th=[15926], 00:10:42.980 | 70.00th=[17957], 80.00th=[19792], 90.00th=[20841], 95.00th=[25560], 00:10:42.980 | 99.00th=[31327], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:10:42.980 | 99.99th=[40109] 00:10:42.980 write: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1008msec); 0 zone resets 00:10:42.980 slat (usec): min=2, max=35056, avg=163.57, stdev=970.22 00:10:42.980 clat (usec): min=2868, max=56767, avg=22802.13, stdev=9040.33 00:10:42.980 lat (usec): min=2878, max=56794, avg=22965.70, 
stdev=9106.87 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 5342], 5.00th=[11338], 10.00th=[14877], 20.00th=[15664], 00:10:42.980 | 30.00th=[16319], 40.00th=[18482], 50.00th=[20317], 60.00th=[21627], 00:10:42.980 | 70.00th=[27919], 80.00th=[31851], 90.00th=[36963], 95.00th=[39584], 00:10:42.980 | 99.00th=[42206], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:10:42.980 | 99.99th=[56886] 00:10:42.980 bw ( KiB/s): min=11592, max=15344, per=20.28%, avg=13468.00, stdev=2653.06, samples=2 00:10:42.980 iops : min= 2898, max= 3836, avg=3367.00, stdev=663.27, samples=2 00:10:42.980 lat (msec) : 4=0.37%, 10=3.40%, 20=61.91%, 50=34.31%, 100=0.02% 00:10:42.980 cpu : usr=3.48%, sys=3.87%, ctx=413, majf=0, minf=1 00:10:42.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:42.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.980 issued rwts: total=3072,3494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.980 job2: (groupid=0, jobs=1): err= 0: pid=2326040: Tue Oct 1 15:44:52 2024 00:10:42.980 read: IOPS=3859, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1004msec) 00:10:42.980 slat (nsec): min=1096, max=18473k, avg=129476.77, stdev=789552.39 00:10:42.980 clat (usec): min=3339, max=52028, avg=16366.82, stdev=7914.14 00:10:42.980 lat (usec): min=3344, max=52043, avg=16496.29, stdev=7981.23 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 3752], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10552], 00:10:42.980 | 30.00th=[11076], 40.00th=[12780], 50.00th=[15008], 60.00th=[16581], 00:10:42.980 | 70.00th=[17957], 80.00th=[20317], 90.00th=[24773], 95.00th=[34341], 00:10:42.980 | 99.00th=[45351], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:10:42.980 | 99.99th=[52167] 00:10:42.980 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 
zone resets 00:10:42.980 slat (nsec): min=1903, max=11803k, avg=114458.00, stdev=673474.22 00:10:42.980 clat (usec): min=1051, max=53188, avg=15561.43, stdev=9552.34 00:10:42.980 lat (usec): min=1061, max=53202, avg=15675.88, stdev=9615.88 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 3130], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 8979], 00:10:42.980 | 30.00th=[ 9503], 40.00th=[11994], 50.00th=[12649], 60.00th=[14746], 00:10:42.980 | 70.00th=[15401], 80.00th=[20055], 90.00th=[27395], 95.00th=[40633], 00:10:42.980 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:10:42.980 | 99.99th=[53216] 00:10:42.980 bw ( KiB/s): min=12288, max=20480, per=24.67%, avg=16384.00, stdev=5792.62, samples=2 00:10:42.980 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:10:42.980 lat (msec) : 2=0.13%, 4=1.33%, 10=24.85%, 20=52.45%, 50=19.88% 00:10:42.980 lat (msec) : 100=1.35% 00:10:42.980 cpu : usr=2.29%, sys=5.08%, ctx=385, majf=0, minf=1 00:10:42.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:42.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.980 issued rwts: total=3875,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.980 job3: (groupid=0, jobs=1): err= 0: pid=2326046: Tue Oct 1 15:44:52 2024 00:10:42.980 read: IOPS=4176, BW=16.3MiB/s (17.1MB/s)(17.0MiB/1043msec) 00:10:42.980 slat (nsec): min=1111, max=36489k, avg=114006.77, stdev=949608.57 00:10:42.980 clat (usec): min=3237, max=75984, avg=15768.83, stdev=11391.64 00:10:42.980 lat (usec): min=3243, max=75999, avg=15882.84, stdev=11440.05 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 8356], 20.00th=[10814], 00:10:42.980 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12518], 
00:10:42.980 | 70.00th=[13698], 80.00th=[15926], 90.00th=[37487], 95.00th=[45351], 00:10:42.980 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:10:42.980 | 99.99th=[76022] 00:10:42.980 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:10:42.980 slat (nsec): min=1790, max=18305k, avg=95281.12, stdev=590929.39 00:10:42.980 clat (usec): min=471, max=39335, avg=13826.15, stdev=6992.16 00:10:42.980 lat (usec): min=496, max=42545, avg=13921.43, stdev=7046.38 00:10:42.980 clat percentiles (usec): 00:10:42.980 | 1.00th=[ 2343], 5.00th=[ 4178], 10.00th=[ 6128], 20.00th=[ 9896], 00:10:42.980 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12780], 00:10:42.980 | 70.00th=[14484], 80.00th=[16909], 90.00th=[24249], 95.00th=[29754], 00:10:42.980 | 99.00th=[33162], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:10:42.980 | 99.99th=[39584] 00:10:42.980 bw ( KiB/s): min=15696, max=21168, per=27.75%, avg=18432.00, stdev=3869.29, samples=2 00:10:42.980 iops : min= 3924, max= 5292, avg=4608.00, stdev=967.32, samples=2 00:10:42.980 lat (usec) : 500=0.01%, 750=0.07% 00:10:42.980 lat (msec) : 2=0.16%, 4=1.87%, 10=14.71%, 20=68.82%, 50=12.68% 00:10:42.980 lat (msec) : 100=1.67% 00:10:42.980 cpu : usr=2.97%, sys=4.51%, ctx=452, majf=0, minf=1 00:10:42.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:42.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.980 issued rwts: total=4356,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.980 00:10:42.980 Run status group 0 (all jobs): 00:10:42.980 READ: bw=60.1MiB/s (63.0MB/s), 11.9MiB/s-18.4MiB/s (12.5MB/s-19.3MB/s), io=62.6MiB (65.7MB), run=1004-1043msec 00:10:42.980 WRITE: bw=64.9MiB/s (68.0MB/s), 13.5MiB/s-19.9MiB/s (14.2MB/s-20.9MB/s), io=67.6MiB 
(70.9MB), run=1004-1043msec 00:10:42.980 00:10:42.980 Disk stats (read/write): 00:10:42.980 nvme0n1: ios=3810/4096, merge=0/0, ticks=45309/48206, in_queue=93515, util=86.47% 00:10:42.980 nvme0n2: ios=2583/3031, merge=0/0, ticks=39542/64838, in_queue=104380, util=93.49% 00:10:42.980 nvme0n3: ios=3423/3584, merge=0/0, ticks=22313/23032, in_queue=45345, util=97.91% 00:10:42.980 nvme0n4: ios=3641/3679, merge=0/0, ticks=30291/30961, in_queue=61252, util=93.36% 00:10:42.980 15:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:42.980 15:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2326170 00:10:42.980 15:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:42.980 15:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:42.980 [global] 00:10:42.980 thread=1 00:10:42.980 invalidate=1 00:10:42.980 rw=read 00:10:42.980 time_based=1 00:10:42.980 runtime=10 00:10:42.980 ioengine=libaio 00:10:42.980 direct=1 00:10:42.980 bs=4096 00:10:42.980 iodepth=1 00:10:42.980 norandommap=1 00:10:42.980 numjobs=1 00:10:42.980 00:10:42.980 [job0] 00:10:42.980 filename=/dev/nvme0n1 00:10:42.980 [job1] 00:10:42.980 filename=/dev/nvme0n2 00:10:42.980 [job2] 00:10:42.980 filename=/dev/nvme0n3 00:10:42.980 [job3] 00:10:42.980 filename=/dev/nvme0n4 00:10:42.980 Could not set queue depth (nvme0n1) 00:10:42.980 Could not set queue depth (nvme0n2) 00:10:42.980 Could not set queue depth (nvme0n3) 00:10:42.980 Could not set queue depth (nvme0n4) 00:10:43.238 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.238 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.238 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:43.238 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.238 fio-3.35 00:10:43.238 Starting 4 threads 00:10:45.762 15:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:46.019 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46567424, buflen=4096 00:10:46.019 fio: pid=2326517, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.019 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:46.276 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.276 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:46.276 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=23941120, buflen=4096 00:10:46.276 fio: pid=2326516, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.533 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42913792, buflen=4096 00:10:46.533 fio: pid=2326502, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.533 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.533 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:46.533 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.533 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:46.533 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=503808, buflen=4096 00:10:46.533 fio: pid=2326515, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:46.792 00:10:46.792 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2326502: Tue Oct 1 15:44:56 2024 00:10:46.792 read: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(40.9MiB/3103msec) 00:10:46.792 slat (usec): min=5, max=22620, avg= 9.88, stdev=230.40 00:10:46.792 clat (usec): min=144, max=42030, avg=283.11, stdev=1801.29 00:10:46.792 lat (usec): min=151, max=42053, avg=292.99, stdev=1816.64 00:10:46.792 clat percentiles (usec): 00:10:46.792 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:10:46.792 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:10:46.792 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 239], 00:10:46.792 | 99.00th=[ 281], 99.50th=[ 330], 99.90th=[41157], 99.95th=[41681], 00:10:46.792 | 99.99th=[42206] 00:10:46.792 bw ( KiB/s): min= 104, max=19855, per=40.21%, avg=13433.17, stdev=8501.41, samples=6 00:10:46.792 iops : min= 26, max= 4963, avg=3358.17, stdev=2125.24, samples=6 00:10:46.792 lat (usec) : 250=97.43%, 500=2.28%, 750=0.08% 00:10:46.792 lat (msec) : 20=0.01%, 50=0.19% 00:10:46.792 cpu : usr=0.71%, sys=3.09%, ctx=10480, majf=0, minf=1 00:10:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 issued rwts: total=10478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.792 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:46.792 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2326515: Tue Oct 1 15:44:56 2024 00:10:46.792 read: IOPS=37, BW=148KiB/s (151kB/s)(492KiB/3330msec) 00:10:46.792 slat (usec): min=8, max=3791, avg=44.76, stdev=339.31 00:10:46.792 clat (usec): min=214, max=48060, avg=26848.95, stdev=19560.97 00:10:46.792 lat (usec): min=230, max=48073, avg=26893.86, stdev=19588.23 00:10:46.792 clat percentiles (usec): 00:10:46.792 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 265], 00:10:46.792 | 30.00th=[ 306], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:46.792 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:46.792 | 99.00th=[42206], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:10:46.792 | 99.99th=[47973] 00:10:46.792 bw ( KiB/s): min= 96, max= 252, per=0.45%, avg=151.33, stdev=64.72, samples=6 00:10:46.792 iops : min= 24, max= 63, avg=37.83, stdev=16.18, samples=6 00:10:46.792 lat (usec) : 250=12.10%, 500=21.77% 00:10:46.792 lat (msec) : 2=0.81%, 50=64.52% 00:10:46.792 cpu : usr=0.00%, sys=0.12%, ctx=126, majf=0, minf=1 00:10:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.792 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2326516: Tue Oct 1 15:44:56 2024 00:10:46.792 read: IOPS=1995, BW=7982KiB/s (8174kB/s)(22.8MiB/2929msec) 00:10:46.792 slat (usec): min=5, max=8688, avg= 8.83, stdev=113.55 00:10:46.792 clat (usec): min=167, max=42358, avg=487.65, stdev=3311.88 00:10:46.792 lat (usec): min=174, max=50839, avg=496.48, 
stdev=3333.05 00:10:46.792 clat percentiles (usec): 00:10:46.792 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:10:46.792 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:10:46.792 | 70.00th=[ 221], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:10:46.792 | 99.00th=[ 416], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:46.792 | 99.99th=[42206] 00:10:46.792 bw ( KiB/s): min= 224, max=19048, per=25.62%, avg=8561.60, stdev=9145.45, samples=5 00:10:46.792 iops : min= 56, max= 4762, avg=2140.40, stdev=2286.36, samples=5 00:10:46.792 lat (usec) : 250=77.13%, 500=22.02%, 750=0.17% 00:10:46.792 lat (msec) : 4=0.02%, 50=0.65% 00:10:46.792 cpu : usr=0.51%, sys=1.81%, ctx=5847, majf=0, minf=2 00:10:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 issued rwts: total=5846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.792 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2326517: Tue Oct 1 15:44:56 2024 00:10:46.792 read: IOPS=4220, BW=16.5MiB/s (17.3MB/s)(44.4MiB/2694msec) 00:10:46.792 slat (nsec): min=4039, max=31621, avg=7139.25, stdev=1000.14 00:10:46.792 clat (usec): min=169, max=19447, avg=226.46, stdev=182.26 00:10:46.792 lat (usec): min=176, max=19466, avg=233.60, stdev=182.37 00:10:46.792 clat percentiles (usec): 00:10:46.792 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:46.792 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 231], 00:10:46.792 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:10:46.792 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 351], 99.95th=[ 404], 00:10:46.792 | 99.99th=[ 502] 00:10:46.792 bw ( KiB/s): 
min=15504, max=18520, per=51.66%, avg=17260.80, stdev=1180.79, samples=5 00:10:46.792 iops : min= 3876, max= 4630, avg=4315.20, stdev=295.20, samples=5 00:10:46.792 lat (usec) : 250=79.89%, 500=20.09%, 750=0.01% 00:10:46.792 lat (msec) : 20=0.01% 00:10:46.792 cpu : usr=1.26%, sys=3.53%, ctx=11371, majf=0, minf=2 00:10:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.792 issued rwts: total=11370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.792 00:10:46.792 Run status group 0 (all jobs): 00:10:46.792 READ: bw=32.6MiB/s (34.2MB/s), 148KiB/s-16.5MiB/s (151kB/s-17.3MB/s), io=109MiB (114MB), run=2694-3330msec 00:10:46.792 00:10:46.792 Disk stats (read/write): 00:10:46.792 nvme0n1: ios=10342/0, merge=0/0, ticks=2886/0, in_queue=2886, util=93.37% 00:10:46.792 nvme0n2: ios=122/0, merge=0/0, ticks=3261/0, in_queue=3261, util=95.24% 00:10:46.792 nvme0n3: ios=5842/0, merge=0/0, ticks=2697/0, in_queue=2697, util=95.89% 00:10:46.792 nvme0n4: ios=10998/0, merge=0/0, ticks=2420/0, in_queue=2420, util=96.38% 00:10:46.792 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.792 15:44:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:47.050 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.050 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:47.308 15:44:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.308 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2326170 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:47.566 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.823 15:44:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:47.823 nvmf hotplug test: fio failed as expected 00:10:47.823 15:44:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.081 rmmod nvme_tcp 00:10:48.081 rmmod nvme_fabrics 00:10:48.081 rmmod nvme_keyring 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 2323364 ']' 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 2323364 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2323364 ']' 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2323364 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2323364 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2323364' 00:10:48.081 killing process with pid 2323364 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2323364 00:10:48.081 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2323364 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.340 15:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.875 00:10:50.875 real 0m27.648s 00:10:50.875 user 1m48.613s 00:10:50.875 sys 0m9.110s 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.875 ************************************ 00:10:50.875 END TEST nvmf_fio_target 00:10:50.875 ************************************ 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:50.875 15:45:00 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.875 ************************************ 00:10:50.875 START TEST nvmf_bdevio 00:10:50.875 ************************************ 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.875 * Looking for test storage... 00:10:50.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:50.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.875 --rc genhtml_branch_coverage=1 00:10:50.875 --rc genhtml_function_coverage=1 00:10:50.875 --rc genhtml_legend=1 00:10:50.875 --rc geninfo_all_blocks=1 00:10:50.875 --rc geninfo_unexecuted_blocks=1 00:10:50.875 00:10:50.875 ' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:50.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.875 --rc genhtml_branch_coverage=1 00:10:50.875 --rc genhtml_function_coverage=1 00:10:50.875 --rc genhtml_legend=1 00:10:50.875 --rc geninfo_all_blocks=1 00:10:50.875 --rc geninfo_unexecuted_blocks=1 00:10:50.875 00:10:50.875 ' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:50.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.875 --rc genhtml_branch_coverage=1 00:10:50.875 --rc genhtml_function_coverage=1 00:10:50.875 --rc genhtml_legend=1 00:10:50.875 --rc geninfo_all_blocks=1 00:10:50.875 --rc geninfo_unexecuted_blocks=1 00:10:50.875 00:10:50.875 ' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:50.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.875 --rc genhtml_branch_coverage=1 00:10:50.875 --rc genhtml_function_coverage=1 00:10:50.875 --rc genhtml_legend=1 00:10:50.875 --rc geninfo_all_blocks=1 00:10:50.875 --rc geninfo_unexecuted_blocks=1 00:10:50.875 00:10:50.875 ' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.875 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.876 15:45:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:57.441 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:57.441 15:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:57.441 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.441 15:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:57.441 Found net devices under 0000:86:00.0: cvl_0_0 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:57.441 Found net devices under 0000:86:00.1: cvl_0_1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.441 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:10:57.441 00:10:57.442 --- 10.0.0.2 ping statistics --- 00:10:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.442 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:57.442 00:10:57.442 --- 10.0.0.1 ping statistics --- 00:10:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.442 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=2330897 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 2330897 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2330897 ']' 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.442 15:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 [2024-10-01 15:45:06.746400] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:57.442 [2024-10-01 15:45:06.746448] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.442 [2024-10-01 15:45:06.819210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.442 [2024-10-01 15:45:06.891781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.442 [2024-10-01 15:45:06.891823] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.442 [2024-10-01 15:45:06.891829] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.442 [2024-10-01 15:45:06.891835] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.442 [2024-10-01 15:45:06.891840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.442 [2024-10-01 15:45:06.891954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.442 [2024-10-01 15:45:06.892060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:57.442 [2024-10-01 15:45:06.892148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.442 [2024-10-01 15:45:06.892149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 [2024-10-01 15:45:07.622575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.442 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.699 Malloc0 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.699 [2024-10-01 
15:45:07.673760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.699 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:57.700 { 00:10:57.700 "params": { 00:10:57.700 "name": "Nvme$subsystem", 00:10:57.700 "trtype": "$TEST_TRANSPORT", 00:10:57.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.700 "adrfam": "ipv4", 00:10:57.700 "trsvcid": "$NVMF_PORT", 00:10:57.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.700 "hdgst": ${hdgst:-false}, 00:10:57.700 "ddgst": ${ddgst:-false} 00:10:57.700 }, 00:10:57.700 "method": "bdev_nvme_attach_controller" 00:10:57.700 } 00:10:57.700 EOF 00:10:57.700 )") 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:57.700 15:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:57.700 "params": { 00:10:57.700 "name": "Nvme1", 00:10:57.700 "trtype": "tcp", 00:10:57.700 "traddr": "10.0.0.2", 00:10:57.700 "adrfam": "ipv4", 00:10:57.700 "trsvcid": "4420", 00:10:57.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.700 "hdgst": false, 00:10:57.700 "ddgst": false 00:10:57.700 }, 00:10:57.700 "method": "bdev_nvme_attach_controller" 00:10:57.700 }' 00:10:57.700 [2024-10-01 15:45:07.726745] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:10:57.700 [2024-10-01 15:45:07.726789] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331143 ] 00:10:57.700 [2024-10-01 15:45:07.795037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.700 [2024-10-01 15:45:07.870803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.700 [2024-10-01 15:45:07.870912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.700 [2024-10-01 15:45:07.870913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.957 I/O targets: 00:10:57.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:57.957 00:10:57.957 00:10:57.957 CUnit - A unit testing framework for C - Version 2.1-3 00:10:57.957 http://cunit.sourceforge.net/ 00:10:57.957 00:10:57.957 00:10:57.957 Suite: bdevio tests on: Nvme1n1 00:10:57.957 Test: blockdev write read block ...passed 00:10:58.214 Test: blockdev write zeroes read block ...passed 00:10:58.214 Test: blockdev write zeroes read no split ...passed 00:10:58.214 Test: blockdev write zeroes read split 
...passed 00:10:58.214 Test: blockdev write zeroes read split partial ...passed 00:10:58.214 Test: blockdev reset ...[2024-10-01 15:45:08.185468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:58.214 [2024-10-01 15:45:08.185529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16773d0 (9): Bad file descriptor 00:10:58.214 [2024-10-01 15:45:08.281725] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:58.214 passed 00:10:58.214 Test: blockdev write read 8 blocks ...passed 00:10:58.214 Test: blockdev write read size > 128k ...passed 00:10:58.214 Test: blockdev write read invalid size ...passed 00:10:58.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:58.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:58.214 Test: blockdev write read max offset ...passed 00:10:58.472 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:58.472 Test: blockdev writev readv 8 blocks ...passed 00:10:58.472 Test: blockdev writev readv 30 x 1block ...passed 00:10:58.472 Test: blockdev writev readv block ...passed 00:10:58.472 Test: blockdev writev readv size > 128k ...passed 00:10:58.472 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:58.472 Test: blockdev comparev and writev ...[2024-10-01 15:45:08.537810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.537839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.537853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.537861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.538111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.538130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.538367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.538386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.472 [2024-10-01 15:45:08.538619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.538631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:10:58.472 [2024-10-01 15:45:08.538639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:58.472 passed 00:10:58.472 Test: blockdev nvme passthru rw ...passed 00:10:58.472 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:45:08.620226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.472 [2024-10-01 15:45:08.620243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.620355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.472 [2024-10-01 15:45:08.620365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.620480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.472 [2024-10-01 15:45:08.620490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:58.472 [2024-10-01 15:45:08.620609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.473 [2024-10-01 15:45:08.620619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:58.473 passed 00:10:58.473 Test: blockdev nvme admin passthru ...passed 00:10:58.731 Test: blockdev copy ...passed 00:10:58.731 00:10:58.731 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.731 suites 1 1 n/a 0 0 00:10:58.731 tests 23 23 23 0 0 00:10:58.731 asserts 152 152 152 0 n/a 00:10:58.731 00:10:58.731 Elapsed time = 1.215 seconds 00:10:58.731 15:45:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.731 rmmod nvme_tcp 00:10:58.731 rmmod nvme_fabrics 00:10:58.731 rmmod nvme_keyring 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 2330897 ']' 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 2330897 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2330897 ']' 
00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2330897 00:10:58.731 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2330897 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2330897' 00:10:58.996 killing process with pid 2330897 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2330897 00:10:58.996 15:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2330897 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:58.996 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:59.256 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.256 
15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.256 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.256 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.256 15:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.159 00:11:01.159 real 0m10.741s 00:11:01.159 user 0m13.121s 00:11:01.159 sys 0m5.032s 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.159 ************************************ 00:11:01.159 END TEST nvmf_bdevio 00:11:01.159 ************************************ 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:01.159 00:11:01.159 real 4m46.093s 00:11:01.159 user 10m54.409s 00:11:01.159 sys 1m40.774s 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.159 15:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.159 ************************************ 00:11:01.159 END TEST nvmf_target_core 00:11:01.159 ************************************ 00:11:01.159 15:45:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:01.159 15:45:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.159 15:45:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.159 15:45:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.418 
************************************ 00:11:01.418 START TEST nvmf_target_extra 00:11:01.418 ************************************ 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:01.418 * Looking for test storage... 00:11:01.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:01.418 
15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:01.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.418 --rc genhtml_branch_coverage=1 00:11:01.418 --rc genhtml_function_coverage=1 00:11:01.418 --rc genhtml_legend=1 00:11:01.418 --rc geninfo_all_blocks=1 00:11:01.418 
--rc geninfo_unexecuted_blocks=1 00:11:01.418 00:11:01.418 ' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:01.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.418 --rc genhtml_branch_coverage=1 00:11:01.418 --rc genhtml_function_coverage=1 00:11:01.418 --rc genhtml_legend=1 00:11:01.418 --rc geninfo_all_blocks=1 00:11:01.418 --rc geninfo_unexecuted_blocks=1 00:11:01.418 00:11:01.418 ' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:01.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.418 --rc genhtml_branch_coverage=1 00:11:01.418 --rc genhtml_function_coverage=1 00:11:01.418 --rc genhtml_legend=1 00:11:01.418 --rc geninfo_all_blocks=1 00:11:01.418 --rc geninfo_unexecuted_blocks=1 00:11:01.418 00:11:01.418 ' 00:11:01.418 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:01.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.418 --rc genhtml_branch_coverage=1 00:11:01.418 --rc genhtml_function_coverage=1 00:11:01.419 --rc genhtml_legend=1 00:11:01.419 --rc geninfo_all_blocks=1 00:11:01.419 --rc geninfo_unexecuted_blocks=1 00:11:01.419 00:11:01.419 ' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.419 ************************************ 00:11:01.419 START TEST nvmf_example 00:11:01.419 ************************************ 00:11:01.419 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:01.679 * Looking for test storage... 00:11:01.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.679 
15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.679 --rc genhtml_branch_coverage=1 00:11:01.679 --rc genhtml_function_coverage=1 00:11:01.679 --rc genhtml_legend=1 00:11:01.679 --rc geninfo_all_blocks=1 00:11:01.679 --rc geninfo_unexecuted_blocks=1 00:11:01.679 00:11:01.679 ' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.679 --rc genhtml_branch_coverage=1 00:11:01.679 --rc genhtml_function_coverage=1 00:11:01.679 --rc genhtml_legend=1 00:11:01.679 --rc geninfo_all_blocks=1 00:11:01.679 --rc geninfo_unexecuted_blocks=1 00:11:01.679 00:11:01.679 ' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.679 --rc genhtml_branch_coverage=1 00:11:01.679 --rc genhtml_function_coverage=1 00:11:01.679 --rc genhtml_legend=1 00:11:01.679 --rc geninfo_all_blocks=1 00:11:01.679 --rc geninfo_unexecuted_blocks=1 00:11:01.679 00:11:01.679 ' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:01.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.679 --rc 
genhtml_branch_coverage=1 00:11:01.679 --rc genhtml_function_coverage=1 00:11:01.679 --rc genhtml_legend=1 00:11:01.679 --rc geninfo_all_blocks=1 00:11:01.679 --rc geninfo_unexecuted_blocks=1 00:11:01.679 00:11:01.679 ' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.679 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:01.680 15:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.680 
15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.680 15:45:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:08.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:08.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:08.249 
15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:08.249 Found net devices under 0000:86:00.0: cvl_0_0 00:11:08.249 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:08.250 Found net devices under 0000:86:00.1: cvl_0_1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.250 15:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:11:08.250 00:11:08.250 --- 10.0.0.2 ping statistics --- 00:11:08.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.250 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:08.250 00:11:08.250 --- 10.0.0.1 ping statistics --- 00:11:08.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.250 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # 
nvmfexamplestart '-m 0xF' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2335350 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2335350 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2335350 ']' 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.250 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.817 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:08.818 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:18.791 Initializing NVMe Controllers 00:11:18.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:18.791 Initialization complete. Launching workers. 00:11:18.791 ======================================================== 00:11:18.791 Latency(us) 00:11:18.791 Device Information : IOPS MiB/s Average min max 00:11:18.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18402.88 71.89 3477.12 497.64 15480.48 00:11:18.791 ======================================================== 00:11:18.791 Total : 18402.88 71.89 3477.12 497.64 15480.48 00:11:18.791 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.791 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.791 rmmod nvme_tcp 00:11:19.100 rmmod nvme_fabrics 00:11:19.100 rmmod nvme_keyring 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- 
# return 0 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 2335350 ']' 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 2335350 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2335350 ']' 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2335350 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2335350 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2335350' 00:11:19.100 killing process with pid 2335350 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2335350 00:11:19.100 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2335350 00:11:19.100 nvmf threads initialize successfully 00:11:19.100 bdev subsystem init successfully 00:11:19.100 created a nvmf target service 00:11:19.100 create targets's poll groups done 00:11:19.100 all subsystems of target started 00:11:19.100 nvmf target is running 00:11:19.100 all subsystems of target stopped 00:11:19.100 destroy targets's poll groups done 00:11:19.101 destroyed the nvmf target service 00:11:19.101 bdev subsystem finish successfully 00:11:19.101 nvmf threads destroy successfully 00:11:19.459 15:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.459 15:45:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.386 00:11:21.386 real 0m19.763s 00:11:21.386 user 0m45.714s 00:11:21.386 sys 0m6.165s 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.386 ************************************ 00:11:21.386 END TEST nvmf_example 00:11:21.386 ************************************ 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.386 ************************************ 00:11:21.386 START TEST nvmf_filesystem 00:11:21.386 ************************************ 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.386 * Looking for test storage... 
00:11:21.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.386 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:21.648 
15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.648 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:21.648 --rc genhtml_branch_coverage=1 00:11:21.648 --rc genhtml_function_coverage=1 00:11:21.648 --rc genhtml_legend=1 00:11:21.648 --rc geninfo_all_blocks=1 00:11:21.648 --rc geninfo_unexecuted_blocks=1 00:11:21.648 00:11:21.648 ' 00:11:21.648 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.648 --rc genhtml_branch_coverage=1 00:11:21.648 --rc genhtml_function_coverage=1 00:11:21.648 --rc genhtml_legend=1 00:11:21.648 --rc geninfo_all_blocks=1 00:11:21.648 --rc geninfo_unexecuted_blocks=1 00:11:21.648 00:11:21.648 ' 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.649 --rc genhtml_branch_coverage=1 00:11:21.649 --rc genhtml_function_coverage=1 00:11:21.649 --rc genhtml_legend=1 00:11:21.649 --rc geninfo_all_blocks=1 00:11:21.649 --rc geninfo_unexecuted_blocks=1 00:11:21.649 00:11:21.649 ' 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.649 --rc genhtml_branch_coverage=1 00:11:21.649 --rc genhtml_function_coverage=1 00:11:21.649 --rc genhtml_legend=1 00:11:21.649 --rc geninfo_all_blocks=1 00:11:21.649 --rc geninfo_unexecuted_blocks=1 00:11:21.649 00:11:21.649 ' 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:21.649 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:21.649 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:21.649 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:21.649 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:21.650 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:21.650 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:21.650 #define SPDK_CONFIG_H 00:11:21.650 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:21.650 #define SPDK_CONFIG_APPS 1 00:11:21.650 #define SPDK_CONFIG_ARCH native 00:11:21.650 #undef SPDK_CONFIG_ASAN 00:11:21.650 #undef SPDK_CONFIG_AVAHI 00:11:21.650 #undef SPDK_CONFIG_CET 00:11:21.650 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:21.650 #define SPDK_CONFIG_COVERAGE 1 00:11:21.650 #define SPDK_CONFIG_CROSS_PREFIX 00:11:21.650 #undef SPDK_CONFIG_CRYPTO 00:11:21.650 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:21.650 #undef SPDK_CONFIG_CUSTOMOCF 00:11:21.650 #undef SPDK_CONFIG_DAOS 00:11:21.650 #define SPDK_CONFIG_DAOS_DIR 00:11:21.650 #define SPDK_CONFIG_DEBUG 1 00:11:21.650 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:21.650 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:21.650 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:21.650 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:21.650 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:21.650 #undef SPDK_CONFIG_DPDK_UADK 00:11:21.650 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.650 #define SPDK_CONFIG_EXAMPLES 1 00:11:21.650 #undef SPDK_CONFIG_FC 00:11:21.650 #define SPDK_CONFIG_FC_PATH 00:11:21.650 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:21.650 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:21.650 #define SPDK_CONFIG_FSDEV 1 00:11:21.650 #undef SPDK_CONFIG_FUSE 00:11:21.650 #undef SPDK_CONFIG_FUZZER 00:11:21.650 #define SPDK_CONFIG_FUZZER_LIB 00:11:21.650 #undef SPDK_CONFIG_GOLANG 00:11:21.650 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:21.650 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:21.650 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:21.650 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:21.650 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:21.650 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:11:21.650 #undef SPDK_CONFIG_HAVE_LZ4 00:11:21.650 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:21.650 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:21.651 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:21.651 #define SPDK_CONFIG_IDXD 1 00:11:21.651 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:21.651 #undef SPDK_CONFIG_IPSEC_MB 00:11:21.651 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:21.651 #define SPDK_CONFIG_ISAL 1 00:11:21.651 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:21.651 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:21.651 #define SPDK_CONFIG_LIBDIR 00:11:21.651 #undef SPDK_CONFIG_LTO 00:11:21.651 #define SPDK_CONFIG_MAX_LCORES 128 00:11:21.651 #define SPDK_CONFIG_NVME_CUSE 1 00:11:21.651 #undef SPDK_CONFIG_OCF 00:11:21.651 #define SPDK_CONFIG_OCF_PATH 00:11:21.651 #define SPDK_CONFIG_OPENSSL_PATH 00:11:21.651 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:21.651 #define SPDK_CONFIG_PGO_DIR 00:11:21.651 #undef SPDK_CONFIG_PGO_USE 00:11:21.651 #define SPDK_CONFIG_PREFIX /usr/local 00:11:21.651 #undef SPDK_CONFIG_RAID5F 00:11:21.651 #undef SPDK_CONFIG_RBD 00:11:21.651 #define SPDK_CONFIG_RDMA 1 00:11:21.651 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:21.651 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:21.651 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:21.651 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:21.651 #define SPDK_CONFIG_SHARED 1 00:11:21.651 #undef SPDK_CONFIG_SMA 00:11:21.651 #define SPDK_CONFIG_TESTS 1 00:11:21.651 #undef SPDK_CONFIG_TSAN 00:11:21.651 #define SPDK_CONFIG_UBLK 1 00:11:21.651 #define SPDK_CONFIG_UBSAN 1 00:11:21.651 #undef SPDK_CONFIG_UNIT_TESTS 00:11:21.651 #undef SPDK_CONFIG_URING 00:11:21.651 #define SPDK_CONFIG_URING_PATH 00:11:21.651 #undef SPDK_CONFIG_URING_ZNS 00:11:21.651 #undef SPDK_CONFIG_USDT 00:11:21.651 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:21.651 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:21.651 #define SPDK_CONFIG_VFIO_USER 1 00:11:21.651 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:21.651 
#define SPDK_CONFIG_VHOST 1 00:11:21.651 #define SPDK_CONFIG_VIRTIO 1 00:11:21.651 #undef SPDK_CONFIG_VTUNE 00:11:21.651 #define SPDK_CONFIG_VTUNE_DIR 00:11:21.651 #define SPDK_CONFIG_WERROR 1 00:11:21.651 #define SPDK_CONFIG_WPDK_DIR 00:11:21.651 #undef SPDK_CONFIG_XNVME 00:11:21.651 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:21.651 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:21.652 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:21.652 
15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:21.652 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:21.652 
15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:21.652 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:21.653 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.653 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2337760 ]] 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2337760 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:21.654 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.qXLLWZ 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qXLLWZ/tests/target /tmp/spdk.qXLLWZ 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=677449728 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606980096 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=190023532544 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963953152 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5940420608 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971945472 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981452288 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:11:21.655 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=524288 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:21.655 * Looking for test storage... 
00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=190023532544 00:11:21.655 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8155013120 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.656 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:21.656 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.656 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.915 --rc genhtml_branch_coverage=1 00:11:21.915 --rc genhtml_function_coverage=1 00:11:21.915 --rc genhtml_legend=1 00:11:21.915 --rc geninfo_all_blocks=1 00:11:21.915 --rc geninfo_unexecuted_blocks=1 00:11:21.915 00:11:21.915 ' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.915 --rc genhtml_branch_coverage=1 00:11:21.915 --rc genhtml_function_coverage=1 00:11:21.915 --rc genhtml_legend=1 00:11:21.915 --rc geninfo_all_blocks=1 00:11:21.915 --rc geninfo_unexecuted_blocks=1 00:11:21.915 00:11:21.915 ' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.915 --rc genhtml_branch_coverage=1 00:11:21.915 --rc genhtml_function_coverage=1 00:11:21.915 --rc genhtml_legend=1 00:11:21.915 --rc geninfo_all_blocks=1 00:11:21.915 --rc geninfo_unexecuted_blocks=1 00:11:21.915 00:11:21.915 ' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.915 --rc genhtml_branch_coverage=1 00:11:21.915 --rc genhtml_function_coverage=1 00:11:21.915 --rc genhtml_legend=1 00:11:21.915 --rc geninfo_all_blocks=1 00:11:21.915 --rc geninfo_unexecuted_blocks=1 00:11:21.915 00:11:21.915 ' 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.915 15:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.915 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.916 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.485 15:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:28.485 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:28.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:28.485 Found net devices under 0000:86:00.0: cvl_0_0 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:28.485 Found net devices under 0000:86:00.1: cvl_0_1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.485 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:11:28.486 00:11:28.486 --- 10.0.0.2 ping statistics --- 00:11:28.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.486 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:11:28.486 00:11:28.486 --- 10.0.0.1 ping statistics --- 00:11:28.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.486 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:28.486 15:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.486 ************************************ 00:11:28.486 START TEST nvmf_filesystem_no_in_capsule 00:11:28.486 ************************************ 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2340804 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2340804 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 2340804 ']' 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.486 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.486 [2024-10-01 15:45:38.007047] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:28.486 [2024-10-01 15:45:38.007086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.486 [2024-10-01 15:45:38.078634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.486 [2024-10-01 15:45:38.157927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.486 [2024-10-01 15:45:38.157969] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.486 [2024-10-01 15:45:38.157976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.486 [2024-10-01 15:45:38.157983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.486 [2024-10-01 15:45:38.157988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.486 [2024-10-01 15:45:38.158045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.486 [2024-10-01 15:45:38.158084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.486 [2024-10-01 15:45:38.158189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.486 [2024-10-01 15:45:38.158190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.744 [2024-10-01 15:45:38.889766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.744 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 Malloc1 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 [2024-10-01 15:45:39.045355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:29.004 15:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.004 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:29.004 { 00:11:29.004 "name": "Malloc1", 00:11:29.005 "aliases": [ 00:11:29.005 "6777a28e-d0c5-4e1f-a902-ee26ac612a76" 00:11:29.005 ], 00:11:29.005 "product_name": "Malloc disk", 00:11:29.005 "block_size": 512, 00:11:29.005 "num_blocks": 1048576, 00:11:29.005 "uuid": "6777a28e-d0c5-4e1f-a902-ee26ac612a76", 00:11:29.005 "assigned_rate_limits": { 00:11:29.005 "rw_ios_per_sec": 0, 00:11:29.005 "rw_mbytes_per_sec": 0, 00:11:29.005 "r_mbytes_per_sec": 0, 00:11:29.005 "w_mbytes_per_sec": 0 00:11:29.005 }, 00:11:29.005 "claimed": true, 00:11:29.005 "claim_type": "exclusive_write", 00:11:29.005 "zoned": false, 00:11:29.005 "supported_io_types": { 00:11:29.005 "read": true, 00:11:29.005 "write": true, 00:11:29.005 "unmap": true, 00:11:29.005 "flush": true, 00:11:29.005 "reset": true, 00:11:29.005 "nvme_admin": false, 00:11:29.005 "nvme_io": false, 00:11:29.005 "nvme_io_md": false, 00:11:29.005 "write_zeroes": true, 00:11:29.005 "zcopy": true, 00:11:29.005 "get_zone_info": false, 00:11:29.005 "zone_management": false, 00:11:29.005 "zone_append": false, 00:11:29.005 "compare": false, 00:11:29.005 "compare_and_write": 
false, 00:11:29.005 "abort": true, 00:11:29.005 "seek_hole": false, 00:11:29.005 "seek_data": false, 00:11:29.005 "copy": true, 00:11:29.005 "nvme_iov_md": false 00:11:29.005 }, 00:11:29.005 "memory_domains": [ 00:11:29.005 { 00:11:29.005 "dma_device_id": "system", 00:11:29.005 "dma_device_type": 1 00:11:29.005 }, 00:11:29.005 { 00:11:29.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.005 "dma_device_type": 2 00:11:29.005 } 00:11:29.005 ], 00:11:29.005 "driver_specific": {} 00:11:29.005 } 00:11:29.005 ]' 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:29.005 15:45:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.381 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:30.381 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.381 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.381 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:30.381 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:32.284 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:32.284 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:32.284 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.284 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.285 15:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.285 15:45:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:33.224 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:34.162 15:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.162 ************************************ 00:11:34.162 START TEST filesystem_ext4 00:11:34.162 ************************************ 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:34.162 15:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:34.162 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:34.162 mke2fs 1.47.0 (5-Feb-2023) 00:11:34.162 Discarding device blocks: 0/522240 done 00:11:34.421 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:34.421 Filesystem UUID: 2277ed01-970b-41a3-be6a-78d6dcbed3dd 00:11:34.421 Superblock backups stored on blocks: 00:11:34.421 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:34.421 00:11:34.421 Allocating group tables: 0/64 done 00:11:34.421 Writing inode tables: 0/64 done 00:11:34.680 Creating journal (8192 blocks): done 00:11:34.680 Writing superblocks and filesystem accounting information: 0/64 done 00:11:34.680 00:11:34.680 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:34.680 15:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.954 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.954 15:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2340804 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.954 00:11:39.954 real 0m5.907s 00:11:39.954 user 0m0.014s 00:11:39.954 sys 0m0.085s 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:39.954 ************************************ 00:11:39.954 END TEST filesystem_ext4 00:11:39.954 ************************************ 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:39.954 
15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.954 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.213 ************************************ 00:11:40.213 START TEST filesystem_btrfs 00:11:40.213 ************************************ 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:40.213 15:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.213 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:40.472 btrfs-progs v6.8.1 00:11:40.472 See https://btrfs.readthedocs.io for more information. 00:11:40.472 00:11:40.472 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:40.472 NOTE: several default settings have changed in version 5.15, please make sure 00:11:40.472 this does not affect your deployments: 00:11:40.472 - DUP for metadata (-m dup) 00:11:40.472 - enabled no-holes (-O no-holes) 00:11:40.472 - enabled free-space-tree (-R free-space-tree) 00:11:40.472 00:11:40.472 Label: (null) 00:11:40.472 UUID: bb8fcc18-dfb5-4861-b7d8-a5c53998cd0e 00:11:40.472 Node size: 16384 00:11:40.472 Sector size: 4096 (CPU page size: 4096) 00:11:40.472 Filesystem size: 510.00MiB 00:11:40.472 Block group profiles: 00:11:40.472 Data: single 8.00MiB 00:11:40.472 Metadata: DUP 32.00MiB 00:11:40.473 System: DUP 8.00MiB 00:11:40.473 SSD detected: yes 00:11:40.473 Zoned device: no 00:11:40.473 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:40.473 Checksum: crc32c 00:11:40.473 Number of devices: 1 00:11:40.473 Devices: 00:11:40.473 ID SIZE PATH 00:11:40.473 1 510.00MiB /dev/nvme0n1p1 00:11:40.473 00:11:40.473 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:40.473 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.732 15:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2340804 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.732 00:11:40.732 real 0m0.574s 00:11:40.732 user 0m0.023s 00:11:40.732 sys 0m0.117s 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.732 
15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.732 ************************************ 00:11:40.732 END TEST filesystem_btrfs 00:11:40.732 ************************************ 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.732 ************************************ 00:11:40.732 START TEST filesystem_xfs 00:11:40.732 ************************************ 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.732 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:40.732 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:40.732 = sectsz=512 attr=2, projid32bit=1 00:11:40.732 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:40.732 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:40.732 data = bsize=4096 blocks=130560, imaxpct=25 00:11:40.733 = sunit=0 swidth=0 blks 00:11:40.733 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:40.733 log =internal log bsize=4096 blocks=16384, version=2 00:11:40.733 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:40.733 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:41.670 Discarding blocks...Done. 
00:11:41.671 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:41.671 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2340804 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.211 15:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.211 00:11:44.211 real 0m3.383s 00:11:44.211 user 0m0.021s 00:11:44.211 sys 0m0.078s 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.211 ************************************ 00:11:44.211 END TEST filesystem_xfs 00:11:44.211 ************************************ 00:11:44.211 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2340804 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2340804 ']' 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2340804 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2340804 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2340804' 00:11:44.471 killing process with pid 2340804 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2340804 00:11:44.471 15:45:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2340804 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:45.039 00:11:45.039 real 0m17.053s 00:11:45.039 user 1m7.064s 00:11:45.039 sys 0m1.436s 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.039 ************************************ 00:11:45.039 END TEST nvmf_filesystem_no_in_capsule 00:11:45.039 ************************************ 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.039 15:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.039 ************************************ 00:11:45.039 START TEST nvmf_filesystem_in_capsule 00:11:45.039 ************************************ 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2343803 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2343803 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2343803 ']' 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.039 15:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.039 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.039 [2024-10-01 15:45:55.135178] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:11:45.039 [2024-10-01 15:45:55.135221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.039 [2024-10-01 15:45:55.207692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.299 [2024-10-01 15:45:55.288234] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.299 [2024-10-01 15:45:55.288272] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.299 [2024-10-01 15:45:55.288279] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.299 [2024-10-01 15:45:55.288285] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.299 [2024-10-01 15:45:55.288290] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:45.299 [2024-10-01 15:45:55.288348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.299 [2024-10-01 15:45:55.288452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.299 [2024-10-01 15:45:55.288550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.299 [2024-10-01 15:45:55.288550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:45.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:45.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:45.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.866 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.866 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:45.866 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.867 [2024-10-01 15:45:56.014742] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.867 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.126 Malloc1 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.126 15:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.126 [2024-10-01 15:45:56.158771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.126 15:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:46.126 { 00:11:46.126 "name": "Malloc1", 00:11:46.126 "aliases": [ 00:11:46.126 "650d00ba-0400-478c-9d6d-3e8c99e744f3" 00:11:46.126 ], 00:11:46.126 "product_name": "Malloc disk", 00:11:46.126 "block_size": 512, 00:11:46.126 "num_blocks": 1048576, 00:11:46.126 "uuid": "650d00ba-0400-478c-9d6d-3e8c99e744f3", 00:11:46.126 "assigned_rate_limits": { 00:11:46.126 "rw_ios_per_sec": 0, 00:11:46.126 "rw_mbytes_per_sec": 0, 00:11:46.126 "r_mbytes_per_sec": 0, 00:11:46.126 "w_mbytes_per_sec": 0 00:11:46.126 }, 00:11:46.126 "claimed": true, 00:11:46.126 "claim_type": "exclusive_write", 00:11:46.126 "zoned": false, 00:11:46.126 "supported_io_types": { 00:11:46.126 "read": true, 00:11:46.126 "write": true, 00:11:46.126 "unmap": true, 00:11:46.126 "flush": true, 00:11:46.126 "reset": true, 00:11:46.126 "nvme_admin": false, 00:11:46.126 "nvme_io": false, 00:11:46.126 "nvme_io_md": false, 00:11:46.126 "write_zeroes": true, 00:11:46.126 "zcopy": true, 00:11:46.126 "get_zone_info": false, 00:11:46.126 "zone_management": false, 00:11:46.126 "zone_append": false, 00:11:46.126 "compare": false, 00:11:46.126 "compare_and_write": false, 00:11:46.126 "abort": true, 00:11:46.126 "seek_hole": false, 00:11:46.126 "seek_data": false, 00:11:46.126 "copy": true, 00:11:46.126 "nvme_iov_md": false 00:11:46.126 }, 00:11:46.126 "memory_domains": [ 00:11:46.126 { 00:11:46.126 "dma_device_id": "system", 00:11:46.126 "dma_device_type": 1 00:11:46.126 }, 00:11:46.126 { 00:11:46.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.126 "dma_device_type": 2 00:11:46.126 } 00:11:46.126 ], 00:11:46.126 
"driver_specific": {} 00:11:46.126 } 00:11:46.126 ]' 00:11:46.126 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.127 15:45:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.504 15:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.504 15:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:47.504 15:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.504 15:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:47.504 15:45:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:49.409 15:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:49.409 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.668 15:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.606 ************************************ 00:11:50.606 START TEST filesystem_in_capsule_ext4 00:11:50.606 ************************************ 00:11:50.606 15:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:50.606 15:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.606 mke2fs 1.47.0 (5-Feb-2023) 00:11:50.865 Discarding device blocks: 
0/522240 done 00:11:50.865 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.865 Filesystem UUID: d11eedff-67eb-4b93-86b6-9dda2b2b9980 00:11:50.865 Superblock backups stored on blocks: 00:11:50.865 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.865 00:11:50.865 Allocating group tables: 0/64 done 00:11:50.865 Writing inode tables: 0/64 done 00:11:50.865 Creating journal (8192 blocks): done 00:11:53.177 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:11:53.177 00:11:53.177 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:53.177 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2343803 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.446 00:11:58.446 real 0m7.592s 00:11:58.446 user 0m0.032s 00:11:58.446 sys 0m0.068s 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:58.446 ************************************ 00:11:58.446 END TEST filesystem_in_capsule_ext4 00:11:58.446 ************************************ 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.446 ************************************ 00:11:58.446 START 
TEST filesystem_in_capsule_btrfs 00:11:58.446 ************************************ 00:11:58.446 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:58.447 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:58.723 btrfs-progs v6.8.1 00:11:58.723 See https://btrfs.readthedocs.io for more information. 00:11:58.723 00:11:58.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:58.723 NOTE: several default settings have changed in version 5.15, please make sure 00:11:58.723 this does not affect your deployments: 00:11:58.723 - DUP for metadata (-m dup) 00:11:58.723 - enabled no-holes (-O no-holes) 00:11:58.723 - enabled free-space-tree (-R free-space-tree) 00:11:58.723 00:11:58.723 Label: (null) 00:11:58.723 UUID: 24e9837e-fa04-476b-af2e-78babb5a2dd7 00:11:58.723 Node size: 16384 00:11:58.723 Sector size: 4096 (CPU page size: 4096) 00:11:58.723 Filesystem size: 510.00MiB 00:11:58.723 Block group profiles: 00:11:58.723 Data: single 8.00MiB 00:11:58.723 Metadata: DUP 32.00MiB 00:11:58.723 System: DUP 8.00MiB 00:11:58.723 SSD detected: yes 00:11:58.723 Zoned device: no 00:11:58.723 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:58.723 Checksum: crc32c 00:11:58.723 Number of devices: 1 00:11:58.723 Devices: 00:11:58.723 ID SIZE PATH 00:11:58.723 1 510.00MiB /dev/nvme0n1p1 00:11:58.723 00:11:58.723 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:58.723 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2343803 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.981 00:11:58.981 real 0m0.583s 00:11:58.981 user 0m0.026s 00:11:58.981 sys 0m0.114s 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.981 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.981 ************************************ 00:11:58.981 END TEST filesystem_in_capsule_btrfs 00:11:58.981 ************************************ 00:11:58.981 15:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.981 ************************************ 00:11:58.981 START TEST filesystem_in_capsule_xfs 00:11:58.981 ************************************ 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.981 
15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:58.981 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.981 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.981 = sectsz=512 attr=2, projid32bit=1 00:11:58.981 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.981 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.981 data = bsize=4096 blocks=130560, imaxpct=25 00:11:58.981 = sunit=0 swidth=0 blks 00:11:58.981 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.981 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.981 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.981 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.913 Discarding blocks...Done. 
00:11:59.913 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:59.913 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2343803 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.494 00:12:02.494 real 0m3.279s 00:12:02.494 user 0m0.031s 00:12:02.494 sys 0m0.064s 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:02.494 ************************************ 00:12:02.494 END TEST filesystem_in_capsule_xfs 00:12:02.494 ************************************ 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:02.494 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.752 15:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2343803 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2343803 ']' 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2343803 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.752 15:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2343803 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2343803' 00:12:02.752 killing process with pid 2343803 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2343803 00:12:02.752 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2343803 00:12:03.011 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:03.011 00:12:03.011 real 0m18.086s 00:12:03.011 user 1m11.171s 00:12:03.011 sys 0m1.433s 00:12:03.011 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.011 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.011 ************************************ 00:12:03.011 END TEST nvmf_filesystem_in_capsule 00:12:03.011 ************************************ 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.343 rmmod nvme_tcp 00:12:03.343 rmmod nvme_fabrics 00:12:03.343 rmmod nvme_keyring 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.343 15:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.247 00:12:05.247 real 0m43.891s 00:12:05.247 user 2m20.280s 00:12:05.247 sys 0m7.607s 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:05.247 ************************************ 00:12:05.247 END TEST nvmf_filesystem 00:12:05.247 ************************************ 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.247 15:46:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.248 15:46:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.248 ************************************ 00:12:05.248 START TEST nvmf_target_discovery 00:12:05.248 ************************************ 00:12:05.248 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:05.507 * Looking for test storage... 
00:12:05.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.507 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:05.508 
15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.508 --rc genhtml_branch_coverage=1 00:12:05.508 --rc genhtml_function_coverage=1 00:12:05.508 --rc genhtml_legend=1 00:12:05.508 --rc geninfo_all_blocks=1 00:12:05.508 --rc geninfo_unexecuted_blocks=1 00:12:05.508 00:12:05.508 ' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.508 --rc genhtml_branch_coverage=1 00:12:05.508 --rc genhtml_function_coverage=1 00:12:05.508 --rc genhtml_legend=1 00:12:05.508 --rc geninfo_all_blocks=1 00:12:05.508 --rc geninfo_unexecuted_blocks=1 00:12:05.508 00:12:05.508 ' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.508 --rc genhtml_branch_coverage=1 00:12:05.508 --rc genhtml_function_coverage=1 00:12:05.508 --rc genhtml_legend=1 00:12:05.508 --rc geninfo_all_blocks=1 00:12:05.508 --rc geninfo_unexecuted_blocks=1 00:12:05.508 00:12:05.508 ' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.508 --rc genhtml_branch_coverage=1 00:12:05.508 --rc genhtml_function_coverage=1 00:12:05.508 --rc genhtml_legend=1 00:12:05.508 --rc geninfo_all_blocks=1 00:12:05.508 --rc geninfo_unexecuted_blocks=1 00:12:05.508 00:12:05.508 ' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.508 15:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.508 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.509 15:46:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.076 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.076 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:12.076 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:12.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:12.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:12.076 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.076 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:12.077 Found net devices under 0000:86:00.0: cvl_0_0 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.077 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:12.077 Found net devices under 0000:86:00.1: cvl_0_1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
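The discovery loop above (common.sh lines 406-425) resolves each detected PCI function to its kernel netdev by globbing `/sys/bus/pci/devices/<bdf>/net/*`, which is how `cvl_0_0` and `cvl_0_1` are found under 0000:86:00.0/.1. A simplified, self-contained sketch of that lookup, exercised against a mock sysfs tree so it runs anywhere (the mock paths and device names are illustrative):

```shell
#!/usr/bin/env bash
# Simplified pci -> netdev mapping, as done by nvmf/common.sh.
# SYSFS_ROOT is normally /sys/bus/pci/devices; a mock tree is used here.
net_devs_for_pci() {
    local pci=$1 d
    # Each entry under <pci>/net/ is a netdev name bound to that function.
    for d in "$SYSFS_ROOT/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"
    done
}

# Build a mock sysfs tree so the sketch is runnable without real NICs.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/0000:86:00.0/net/cvl_0_0"
mkdir -p "$SYSFS_ROOT/0000:86:00.1/net/cvl_0_1"

net_devs_for_pci 0000:86:00.0
```

The real script additionally strips the directory prefix with `"${pci_net_devs[@]##*/}"`, the same `##*/` expansion used per-entry above.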
00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:12.077 00:12:12.077 --- 10.0.0.2 ping statistics --- 00:12:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.077 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:12:12.077 00:12:12.077 --- 10.0.0.1 ping statistics --- 00:12:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.077 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:12.077 15:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=2350539 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 2350539 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2350539 ']' 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
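After launching `nvmf_tgt` inside the target namespace, `waitforlisten` blocks until the app is listening on its RPC socket (`/var/tmp/spdk.sock`, with `max_retries=100` as logged above). A hedged sketch of that bounded retry loop; it polls an arbitrary path rather than a real UNIX socket, which is a simplification of this sketch:

```shell
#!/usr/bin/env bash
# Minimal waitforlisten-style loop: poll for a path with bounded retries.
# ASSUMPTION: the real helper checks the RPC UNIX socket; a plain path
# existence check stands in for that here.
waitforpath() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1   # timed out waiting for the target to come up
}

# Demo: create the path asynchronously, then wait for it.
tmp=$(mktemp -u)
(sleep 0.3; touch "$tmp") &
waitforpath "$tmp" && echo "target is up"
wait
```

Bounding the retries is what turns a hung target into a test failure instead of a stuck CI job.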
00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.077 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.077 [2024-10-01 15:46:21.664014] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:12.077 [2024-10-01 15:46:21.664056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.077 [2024-10-01 15:46:21.733718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.077 [2024-10-01 15:46:21.813379] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.077 [2024-10-01 15:46:21.813417] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.077 [2024-10-01 15:46:21.813424] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.077 [2024-10-01 15:46:21.813431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.077 [2024-10-01 15:46:21.813435] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:12.077 [2024-10-01 15:46:21.813490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.077 [2024-10-01 15:46:21.813599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.077 [2024-10-01 15:46:21.813703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.077 [2024-10-01 15:46:21.813703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.337 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.337 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:12.337 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:12.337 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.337 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.597 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.597 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.597 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.597 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.597 [2024-10-01 15:46:22.549584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:12.598 15:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 Null1 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 [2024-10-01 15:46:22.595034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 Null2 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 
15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 Null3 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 Null4 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:12.598 15:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.598 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:12.859 00:12:12.859 Discovery Log Number of Records 6, Generation counter 6 00:12:12.859 =====Discovery Log Entry 0====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: current discovery subsystem 00:12:12.859 treq: not required 00:12:12.859 portid: 0 00:12:12.859 trsvcid: 4420 00:12:12.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: explicit discovery connections, duplicate discovery information 00:12:12.859 sectype: none 00:12:12.859 =====Discovery Log Entry 1====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: nvme subsystem 00:12:12.859 treq: not required 00:12:12.859 portid: 0 00:12:12.859 trsvcid: 4420 00:12:12.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: none 00:12:12.859 sectype: none 00:12:12.859 =====Discovery Log Entry 2====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: nvme subsystem 00:12:12.859 treq: not required 00:12:12.859 portid: 0 00:12:12.859 trsvcid: 4420 00:12:12.859 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: none 00:12:12.859 sectype: none 00:12:12.859 =====Discovery Log Entry 3====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: nvme subsystem 00:12:12.859 treq: not required 00:12:12.859 portid: 
0 00:12:12.859 trsvcid: 4420 00:12:12.859 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: none 00:12:12.859 sectype: none 00:12:12.859 =====Discovery Log Entry 4====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: nvme subsystem 00:12:12.859 treq: not required 00:12:12.859 portid: 0 00:12:12.859 trsvcid: 4420 00:12:12.859 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: none 00:12:12.859 sectype: none 00:12:12.859 =====Discovery Log Entry 5====== 00:12:12.859 trtype: tcp 00:12:12.859 adrfam: ipv4 00:12:12.859 subtype: discovery subsystem referral 00:12:12.859 treq: not required 00:12:12.859 portid: 0 00:12:12.859 trsvcid: 4430 00:12:12.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:12.859 traddr: 10.0.0.2 00:12:12.859 eflags: none 00:12:12.859 sectype: none 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:12.859 Perform nvmf subsystem discovery via RPC 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.859 [ 00:12:12.859 { 00:12:12.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:12.859 "subtype": "Discovery", 00:12:12.859 "listen_addresses": [ 00:12:12.859 { 00:12:12.859 "trtype": "TCP", 00:12:12.859 "adrfam": "IPv4", 00:12:12.859 "traddr": "10.0.0.2", 00:12:12.859 "trsvcid": "4420" 00:12:12.859 } 00:12:12.859 ], 00:12:12.859 "allow_any_host": true, 00:12:12.859 "hosts": [] 00:12:12.859 }, 00:12:12.859 { 00:12:12.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.859 "subtype": "NVMe", 00:12:12.859 "listen_addresses": [ 
00:12:12.859 { 00:12:12.859 "trtype": "TCP", 00:12:12.859 "adrfam": "IPv4", 00:12:12.859 "traddr": "10.0.0.2", 00:12:12.859 "trsvcid": "4420" 00:12:12.859 } 00:12:12.859 ], 00:12:12.859 "allow_any_host": true, 00:12:12.859 "hosts": [], 00:12:12.859 "serial_number": "SPDK00000000000001", 00:12:12.859 "model_number": "SPDK bdev Controller", 00:12:12.859 "max_namespaces": 32, 00:12:12.859 "min_cntlid": 1, 00:12:12.859 "max_cntlid": 65519, 00:12:12.859 "namespaces": [ 00:12:12.859 { 00:12:12.859 "nsid": 1, 00:12:12.859 "bdev_name": "Null1", 00:12:12.859 "name": "Null1", 00:12:12.859 "nguid": "8E0022985D294545ACE64364247AD0E1", 00:12:12.859 "uuid": "8e002298-5d29-4545-ace6-4364247ad0e1" 00:12:12.859 } 00:12:12.859 ] 00:12:12.859 }, 00:12:12.859 { 00:12:12.859 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:12.859 "subtype": "NVMe", 00:12:12.859 "listen_addresses": [ 00:12:12.859 { 00:12:12.859 "trtype": "TCP", 00:12:12.859 "adrfam": "IPv4", 00:12:12.859 "traddr": "10.0.0.2", 00:12:12.859 "trsvcid": "4420" 00:12:12.859 } 00:12:12.859 ], 00:12:12.859 "allow_any_host": true, 00:12:12.859 "hosts": [], 00:12:12.859 "serial_number": "SPDK00000000000002", 00:12:12.859 "model_number": "SPDK bdev Controller", 00:12:12.859 "max_namespaces": 32, 00:12:12.859 "min_cntlid": 1, 00:12:12.859 "max_cntlid": 65519, 00:12:12.859 "namespaces": [ 00:12:12.859 { 00:12:12.859 "nsid": 1, 00:12:12.859 "bdev_name": "Null2", 00:12:12.859 "name": "Null2", 00:12:12.859 "nguid": "B8EB29850AD140589CFB4EBA67BE3911", 00:12:12.859 "uuid": "b8eb2985-0ad1-4058-9cfb-4eba67be3911" 00:12:12.859 } 00:12:12.859 ] 00:12:12.859 }, 00:12:12.859 { 00:12:12.859 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:12.859 "subtype": "NVMe", 00:12:12.859 "listen_addresses": [ 00:12:12.859 { 00:12:12.859 "trtype": "TCP", 00:12:12.859 "adrfam": "IPv4", 00:12:12.859 "traddr": "10.0.0.2", 00:12:12.859 "trsvcid": "4420" 00:12:12.859 } 00:12:12.859 ], 00:12:12.859 "allow_any_host": true, 00:12:12.859 "hosts": [], 00:12:12.859 
"serial_number": "SPDK00000000000003", 00:12:12.859 "model_number": "SPDK bdev Controller", 00:12:12.859 "max_namespaces": 32, 00:12:12.859 "min_cntlid": 1, 00:12:12.859 "max_cntlid": 65519, 00:12:12.859 "namespaces": [ 00:12:12.859 { 00:12:12.859 "nsid": 1, 00:12:12.859 "bdev_name": "Null3", 00:12:12.859 "name": "Null3", 00:12:12.859 "nguid": "2F0AE21C939848B38BDB321267231887", 00:12:12.859 "uuid": "2f0ae21c-9398-48b3-8bdb-321267231887" 00:12:12.859 } 00:12:12.859 ] 00:12:12.859 }, 00:12:12.859 { 00:12:12.859 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:12.859 "subtype": "NVMe", 00:12:12.859 "listen_addresses": [ 00:12:12.859 { 00:12:12.859 "trtype": "TCP", 00:12:12.859 "adrfam": "IPv4", 00:12:12.859 "traddr": "10.0.0.2", 00:12:12.859 "trsvcid": "4420" 00:12:12.859 } 00:12:12.859 ], 00:12:12.859 "allow_any_host": true, 00:12:12.859 "hosts": [], 00:12:12.859 "serial_number": "SPDK00000000000004", 00:12:12.859 "model_number": "SPDK bdev Controller", 00:12:12.859 "max_namespaces": 32, 00:12:12.859 "min_cntlid": 1, 00:12:12.859 "max_cntlid": 65519, 00:12:12.859 "namespaces": [ 00:12:12.859 { 00:12:12.859 "nsid": 1, 00:12:12.859 "bdev_name": "Null4", 00:12:12.859 "name": "Null4", 00:12:12.859 "nguid": "D1EF33B968EA46A3A8D6C70E926ABCE3", 00:12:12.859 "uuid": "d1ef33b9-68ea-46a3-a8d6-c70e926abce3" 00:12:12.859 } 00:12:12.859 ] 00:12:12.859 } 00:12:12.859 ] 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:12.859 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.860 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:12.860 
15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.860 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.860 rmmod nvme_tcp 00:12:13.119 rmmod nvme_fabrics 00:12:13.119 rmmod nvme_keyring 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 2350539 ']' 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 2350539 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2350539 ']' 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2350539 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350539 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2350539' 00:12:13.119 killing process with pid 2350539 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2350539 00:12:13.119 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2350539 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.379 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.350 00:12:15.350 real 0m10.004s 00:12:15.350 user 0m8.017s 00:12:15.350 sys 0m4.912s 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.350 ************************************ 00:12:15.350 END TEST nvmf_target_discovery 00:12:15.350 ************************************ 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.350 ************************************ 00:12:15.350 START TEST nvmf_referrals 00:12:15.350 ************************************ 00:12:15.350 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:15.645 * Looking for test storage... 
00:12:15.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:15.645 15:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.645 
--rc genhtml_branch_coverage=1 00:12:15.645 --rc genhtml_function_coverage=1 00:12:15.645 --rc genhtml_legend=1 00:12:15.645 --rc geninfo_all_blocks=1 00:12:15.645 --rc geninfo_unexecuted_blocks=1 00:12:15.645 00:12:15.645 ' 00:12:15.645 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:15.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.645 --rc genhtml_branch_coverage=1 00:12:15.645 --rc genhtml_function_coverage=1 00:12:15.645 --rc genhtml_legend=1 00:12:15.645 --rc geninfo_all_blocks=1 00:12:15.645 --rc geninfo_unexecuted_blocks=1 00:12:15.645 00:12:15.645 ' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.646 --rc genhtml_branch_coverage=1 00:12:15.646 --rc genhtml_function_coverage=1 00:12:15.646 --rc genhtml_legend=1 00:12:15.646 --rc geninfo_all_blocks=1 00:12:15.646 --rc geninfo_unexecuted_blocks=1 00:12:15.646 00:12:15.646 ' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:15.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.646 --rc genhtml_branch_coverage=1 00:12:15.646 --rc genhtml_function_coverage=1 00:12:15.646 --rc genhtml_legend=1 00:12:15.646 --rc geninfo_all_blocks=1 00:12:15.646 --rc geninfo_unexecuted_blocks=1 00:12:15.646 00:12:15.646 ' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.646 
15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.646 15:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:15.646 15:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.646 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:22.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:22.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:22.224 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:22.225 Found net devices under 0000:86:00.0: cvl_0_0 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.225 15:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:22.225 Found net devices under 0000:86:00.1: cvl_0_1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:12:22.225 00:12:22.225 --- 10.0.0.2 ping statistics --- 00:12:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.225 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:22.225 00:12:22.225 --- 10.0.0.1 ping statistics --- 00:12:22.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.225 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:22.225 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=2354327 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 2354327 00:12:22.226 
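The `nvmftestinit` section above builds a loopback TCP topology by moving one port of the NIC pair into a network namespace. A minimal sketch of those steps, reconstructed from the `ip` commands in the log (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from the log itself; `run()` only echoes the commands so the sketch is safe to read without root):

```shell
# Hedged sketch of the namespace-based test topology from nvmf/common.sh's
# nvmf_tcp_init, as seen in the log above. run() echoes instead of executing;
# replace its body with "$@" to actually apply the configuration (needs root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                        # target side gets its own netns
run ip link set cvl_0_0 netns "$NS"                           # move the target-facing port
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside netns)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                        # sanity check, as in the log
```

The target application is then launched with `ip netns exec cvl_0_0_ns_spdk`, so it listens on 10.0.0.2 while the initiator-side `nvme` commands reach it from 10.0.0.1 over the physical link.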
15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2354327 ']' 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.226 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.226 [2024-10-01 15:46:31.688762] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:22.226 [2024-10-01 15:46:31.688816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.226 [2024-10-01 15:46:31.761680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.226 [2024-10-01 15:46:31.842344] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.226 [2024-10-01 15:46:31.842380] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:22.226 [2024-10-01 15:46:31.842387] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.226 [2024-10-01 15:46:31.842393] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.226 [2024-10-01 15:46:31.842398] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.226 [2024-10-01 15:46:31.842451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.226 [2024-10-01 15:46:31.842480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.226 [2024-10-01 15:46:31.842585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.226 [2024-10-01 15:46:31.842585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 [2024-10-01 15:46:32.569564] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:22.485 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 [2024-10-01 15:46:32.582906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:22.486 15:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 15:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.745 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.004 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.263 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.522 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.781 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:24.039 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:24.039 15:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:24.039 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:24.039 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:24.039 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.039 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:24.298 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.556 rmmod nvme_tcp 00:12:24.556 rmmod nvme_fabrics 00:12:24.556 rmmod nvme_keyring 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 2354327 ']' 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 2354327 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2354327 ']' 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2354327 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2354327 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2354327' 00:12:24.556 killing process with pid 2354327 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2354327 00:12:24.556 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2354327 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.816 15:46:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.352 00:12:27.352 real 0m11.459s 00:12:27.352 user 0m14.784s 00:12:27.352 sys 0m5.199s 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.352 
************************************ 00:12:27.352 END TEST nvmf_referrals 00:12:27.352 ************************************ 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.352 15:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.352 ************************************ 00:12:27.352 START TEST nvmf_connect_disconnect 00:12:27.352 ************************************ 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:27.353 * Looking for test storage... 
00:12:27.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:27.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.353 --rc genhtml_branch_coverage=1 00:12:27.353 --rc genhtml_function_coverage=1 00:12:27.353 --rc genhtml_legend=1 00:12:27.353 --rc geninfo_all_blocks=1 00:12:27.353 --rc geninfo_unexecuted_blocks=1 00:12:27.353 00:12:27.353 ' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:27.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.353 --rc genhtml_branch_coverage=1 00:12:27.353 --rc genhtml_function_coverage=1 00:12:27.353 --rc genhtml_legend=1 00:12:27.353 --rc geninfo_all_blocks=1 00:12:27.353 --rc geninfo_unexecuted_blocks=1 00:12:27.353 00:12:27.353 ' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:27.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.353 --rc genhtml_branch_coverage=1 00:12:27.353 --rc genhtml_function_coverage=1 00:12:27.353 --rc genhtml_legend=1 00:12:27.353 --rc geninfo_all_blocks=1 00:12:27.353 --rc geninfo_unexecuted_blocks=1 00:12:27.353 00:12:27.353 ' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:27.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.353 --rc genhtml_branch_coverage=1 00:12:27.353 --rc genhtml_function_coverage=1 00:12:27.353 --rc genhtml_legend=1 00:12:27.353 --rc geninfo_all_blocks=1 00:12:27.353 --rc geninfo_unexecuted_blocks=1 00:12:27.353 00:12:27.353 ' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.353 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.354 15:46:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.916 15:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:33.916 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.917 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.917 15:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.917 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.917 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.917 15:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:33.917 00:12:33.917 --- 10.0.0.2 ping statistics --- 00:12:33.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.917 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:33.917 00:12:33.917 --- 10.0.0.1 ping statistics --- 00:12:33.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.917 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=2358424 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 2358424 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2358424 ']' 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.917 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.917 [2024-10-01 15:46:43.315031] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:33.917 [2024-10-01 15:46:43.315083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.917 [2024-10-01 15:46:43.388014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.917 [2024-10-01 15:46:43.469426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:33.917 [2024-10-01 15:46:43.469461] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.917 [2024-10-01 15:46:43.469468] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.917 [2024-10-01 15:46:43.469475] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.917 [2024-10-01 15:46:43.469480] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.917 [2024-10-01 15:46:43.469536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.917 [2024-10-01 15:46:43.469638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.917 [2024-10-01 15:46:43.469743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.917 [2024-10-01 15:46:43.469744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.176 15:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 [2024-10-01 15:46:44.194715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.176 15:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.176 [2024-10-01 15:46:44.246057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:34.176 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:37.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:50.590 15:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:50.590 rmmod nvme_tcp 00:12:50.590 rmmod nvme_fabrics 00:12:50.590 rmmod nvme_keyring 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 2358424 ']' 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 2358424 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2358424 ']' 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2358424 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.590 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358424 
00:12:50.849 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.849 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.849 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358424' 00:12:50.849 killing process with pid 2358424 00:12:50.849 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2358424 00:12:50.849 15:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2358424 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.849 15:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.849 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:53.387 00:12:53.387 real 0m26.063s 00:12:53.387 user 1m11.547s 00:12:53.387 sys 0m5.819s 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.387 ************************************ 00:12:53.387 END TEST nvmf_connect_disconnect 00:12:53.387 ************************************ 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.387 ************************************ 00:12:53.387 START TEST nvmf_multitarget 00:12:53.387 ************************************ 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.387 * Looking for test storage... 
00:12:53.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:53.387 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.387 --rc genhtml_branch_coverage=1 00:12:53.387 --rc genhtml_function_coverage=1 00:12:53.387 --rc genhtml_legend=1 00:12:53.387 --rc geninfo_all_blocks=1 00:12:53.387 --rc geninfo_unexecuted_blocks=1 00:12:53.387 00:12:53.387 ' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:53.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.387 --rc genhtml_branch_coverage=1 00:12:53.387 --rc genhtml_function_coverage=1 00:12:53.387 --rc genhtml_legend=1 00:12:53.387 --rc geninfo_all_blocks=1 00:12:53.387 --rc geninfo_unexecuted_blocks=1 00:12:53.387 00:12:53.387 ' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:53.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.387 --rc genhtml_branch_coverage=1 00:12:53.387 --rc genhtml_function_coverage=1 00:12:53.387 --rc genhtml_legend=1 00:12:53.387 --rc geninfo_all_blocks=1 00:12:53.387 --rc geninfo_unexecuted_blocks=1 00:12:53.387 00:12:53.387 ' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:53.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.387 --rc genhtml_branch_coverage=1 00:12:53.387 --rc genhtml_function_coverage=1 00:12:53.387 --rc genhtml_legend=1 00:12:53.387 --rc geninfo_all_blocks=1 00:12:53.387 --rc geninfo_unexecuted_blocks=1 00:12:53.387 00:12:53.387 ' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.387 15:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.387 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.388 15:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.388 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:59.957 15:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:59.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:59.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.957 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.958 15:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:59.958 Found net devices under 0000:86:00.0: cvl_0_0 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:59.958 15:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:59.958 Found net devices under 0000:86:00.1: cvl_0_1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:12:59.958 00:12:59.958 --- 10.0.0.2 ping statistics --- 00:12:59.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.958 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:59.958 00:12:59.958 --- 10.0.0.1 ping statistics --- 00:12:59.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.958 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=2365027 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 2365027 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2365027 ']' 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.958 15:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.958 [2024-10-01 15:47:09.393522] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:12:59.959 [2024-10-01 15:47:09.393568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.959 [2024-10-01 15:47:09.464501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.959 [2024-10-01 15:47:09.537378] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.959 [2024-10-01 15:47:09.537420] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.959 [2024-10-01 15:47:09.537427] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.959 [2024-10-01 15:47:09.537432] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.959 [2024-10-01 15:47:09.537437] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.959 [2024-10-01 15:47:09.537503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.959 [2024-10-01 15:47:09.537615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.959 [2024-10-01 15:47:09.537721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.959 [2024-10-01 15:47:09.537722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:00.217 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:13:00.475 "nvmf_tgt_1" 00:13:00.475 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:00.475 "nvmf_tgt_2" 00:13:00.475 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:00.475 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:00.733 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:00.733 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:00.733 true 00:13:00.733 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:00.733 true 00:13:00.991 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:00.991 15:47:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:00.991 15:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.991 rmmod nvme_tcp 00:13:00.991 rmmod nvme_fabrics 00:13:00.991 rmmod nvme_keyring 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 2365027 ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 2365027 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2365027 ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2365027 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2365027 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2365027' 00:13:00.991 killing process with pid 2365027 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2365027 00:13:00.991 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2365027 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.251 15:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.786 
00:13:03.786 real 0m10.264s 00:13:03.786 user 0m9.826s 00:13:03.786 sys 0m4.919s 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 ************************************ 00:13:03.786 END TEST nvmf_multitarget 00:13:03.786 ************************************ 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 ************************************ 00:13:03.786 START TEST nvmf_rpc 00:13:03.786 ************************************ 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:03.786 * Looking for test storage... 
00:13:03.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.786 15:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 
00:13:03.786 00:13:03.786 ' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.786 15:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.786 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:03.787 15:47:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.787 15:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.352 
15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:10.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:10.352 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:10.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:10.353 15:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:10.353 Found net devices under 0000:86:00.0: cvl_0_0 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:10.353 Found net devices under 0000:86:00.1: cvl_0_1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:10.353 15:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:10.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:13:10.353 00:13:10.353 --- 10.0.0.2 ping statistics --- 00:13:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.353 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:10.353 00:13:10.353 --- 10.0.0.1 ping statistics --- 00:13:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.353 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=2368828 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.353 
15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 2368828 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2368828 ']' 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.353 15:47:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.353 [2024-10-01 15:47:19.753548] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:13:10.353 [2024-10-01 15:47:19.753599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.353 [2024-10-01 15:47:19.828005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.353 [2024-10-01 15:47:19.903169] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.353 [2024-10-01 15:47:19.903206] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.353 [2024-10-01 15:47:19.903213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.353 [2024-10-01 15:47:19.903219] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:10.353 [2024-10-01 15:47:19.903224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.353 [2024-10-01 15:47:19.903288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.353 [2024-10-01 15:47:19.903398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.353 [2024-10-01 15:47:19.903506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.353 [2024-10-01 15:47:19.903506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:10.613 "tick_rate": 2100000000, 00:13:10.613 "poll_groups": [ 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_000", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 
"current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_001", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_002", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_003", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [] 00:13:10.613 } 00:13:10.613 ] 00:13:10.613 }' 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.613 [2024-10-01 15:47:20.748518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.613 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:10.613 "tick_rate": 2100000000, 00:13:10.613 "poll_groups": [ 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_000", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [ 00:13:10.613 { 00:13:10.613 "trtype": "TCP" 00:13:10.613 } 00:13:10.613 ] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_001", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [ 00:13:10.613 { 00:13:10.613 "trtype": "TCP" 00:13:10.613 } 00:13:10.613 ] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_002", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 
"current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.613 "transports": [ 00:13:10.613 { 00:13:10.613 "trtype": "TCP" 00:13:10.613 } 00:13:10.613 ] 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "nvmf_tgt_poll_group_003", 00:13:10.613 "admin_qpairs": 0, 00:13:10.613 "io_qpairs": 0, 00:13:10.613 "current_admin_qpairs": 0, 00:13:10.613 "current_io_qpairs": 0, 00:13:10.613 "pending_bdev_io": 0, 00:13:10.613 "completed_nvme_io": 0, 00:13:10.614 "transports": [ 00:13:10.614 { 00:13:10.614 "trtype": "TCP" 00:13:10.614 } 00:13:10.614 ] 00:13:10.614 } 00:13:10.614 ] 00:13:10.614 }' 00:13:10.614 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:10.614 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:10.614 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:10.614 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 Malloc1 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 [2024-10-01 15:47:20.916384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.873 
15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:10.873 [2024-10-01 15:47:20.945048] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:10.873 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:10.873 could not add new controller: failed to write to nvme-fabrics device 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.873 15:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.873 15:47:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.250 15:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.250 15:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.250 15:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.250 15:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.250 15:47:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.152 15:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:14.152 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.153 [2024-10-01 15:47:24.299214] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:14.153 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:14.153 could not add new controller: failed to write to nvme-fabrics device 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.153 15:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.153 15:47:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.523 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.523 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.523 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.523 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.523 15:47:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.423 [2024-10-01 15:47:27.573999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.423 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.799 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.799 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.799 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.799 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.799 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.698 15:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.698 [2024-10-01 15:47:30.877018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.698 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.957 15:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.045 15:47:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.045 15:47:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.045 15:47:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.045 15:47:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:22.045 15:47:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:23.948 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 [2024-10-01 15:47:34.197842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.585 15:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.585 15:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.585 15:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:25.585 15:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:25.585 15:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:27.488 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 [2024-10-01 15:47:37.603248] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.489 15:47:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.864 15:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.864 15:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:28.864 15:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.864 15:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:28.864 15:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:30.763 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 [2024-10-01 15:47:41.002676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.023 15:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.958 15:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.959 15:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.959 15:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.959 15:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:31.959 15:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 [2024-10-01 15:47:44.317128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 [2024-10-01 15:47:44.365212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.490 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.490 
15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 [2024-10-01 15:47:44.413335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.491 
15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 [2024-10-01 15:47:44.461510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 [2024-10-01 
15:47:44.509685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.491 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.492 
15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:34.492 "tick_rate": 2100000000, 00:13:34.492 "poll_groups": [ 00:13:34.492 { 00:13:34.492 "name": "nvmf_tgt_poll_group_000", 00:13:34.492 "admin_qpairs": 2, 00:13:34.492 "io_qpairs": 168, 00:13:34.492 "current_admin_qpairs": 0, 00:13:34.492 "current_io_qpairs": 0, 00:13:34.492 "pending_bdev_io": 0, 00:13:34.492 "completed_nvme_io": 267, 00:13:34.492 "transports": [ 00:13:34.492 { 00:13:34.492 "trtype": "TCP" 00:13:34.492 } 00:13:34.492 ] 00:13:34.492 }, 00:13:34.492 { 00:13:34.492 "name": "nvmf_tgt_poll_group_001", 00:13:34.492 "admin_qpairs": 2, 00:13:34.492 "io_qpairs": 168, 00:13:34.492 "current_admin_qpairs": 0, 00:13:34.492 "current_io_qpairs": 0, 00:13:34.492 "pending_bdev_io": 0, 00:13:34.492 "completed_nvme_io": 268, 00:13:34.492 "transports": [ 00:13:34.492 { 00:13:34.492 "trtype": "TCP" 00:13:34.492 } 00:13:34.492 ] 00:13:34.492 }, 00:13:34.492 { 00:13:34.492 "name": "nvmf_tgt_poll_group_002", 00:13:34.492 "admin_qpairs": 1, 00:13:34.492 "io_qpairs": 168, 00:13:34.492 "current_admin_qpairs": 0, 00:13:34.492 "current_io_qpairs": 0, 00:13:34.492 "pending_bdev_io": 0, 00:13:34.492 "completed_nvme_io": 318, 00:13:34.492 "transports": [ 00:13:34.492 { 00:13:34.492 "trtype": "TCP" 00:13:34.492 } 00:13:34.492 ] 00:13:34.492 }, 00:13:34.492 { 00:13:34.492 "name": "nvmf_tgt_poll_group_003", 00:13:34.492 "admin_qpairs": 2, 00:13:34.492 "io_qpairs": 168, 
00:13:34.492 "current_admin_qpairs": 0, 00:13:34.492 "current_io_qpairs": 0, 00:13:34.492 "pending_bdev_io": 0, 00:13:34.492 "completed_nvme_io": 169, 00:13:34.492 "transports": [ 00:13:34.492 { 00:13:34.492 "trtype": "TCP" 00:13:34.492 } 00:13:34.492 ] 00:13:34.492 } 00:13:34.492 ] 00:13:34.492 }' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.492 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.492 rmmod nvme_tcp 00:13:34.492 rmmod nvme_fabrics 00:13:34.751 rmmod nvme_keyring 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 2368828 ']' 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 2368828 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2368828 ']' 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2368828 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2368828 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2368828' 00:13:34.751 killing process with pid 2368828 00:13:34.751 15:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2368828 00:13:34.751 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2368828 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.009 15:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.914 00:13:36.914 real 0m33.552s 00:13:36.914 user 1m41.770s 00:13:36.914 sys 0m6.495s 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 ************************************ 00:13:36.914 END TEST 
nvmf_rpc 00:13:36.914 ************************************ 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.914 15:47:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.173 ************************************ 00:13:37.173 START TEST nvmf_invalid 00:13:37.173 ************************************ 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:37.173 * Looking for test storage... 00:13:37.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.173 --rc genhtml_branch_coverage=1 00:13:37.173 --rc genhtml_function_coverage=1 00:13:37.173 --rc genhtml_legend=1 00:13:37.173 --rc geninfo_all_blocks=1 00:13:37.173 --rc geninfo_unexecuted_blocks=1 00:13:37.173 00:13:37.173 ' 
00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.173 --rc genhtml_branch_coverage=1 00:13:37.173 --rc genhtml_function_coverage=1 00:13:37.173 --rc genhtml_legend=1 00:13:37.173 --rc geninfo_all_blocks=1 00:13:37.173 --rc geninfo_unexecuted_blocks=1 00:13:37.173 00:13:37.173 ' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.173 --rc genhtml_branch_coverage=1 00:13:37.173 --rc genhtml_function_coverage=1 00:13:37.173 --rc genhtml_legend=1 00:13:37.173 --rc geninfo_all_blocks=1 00:13:37.173 --rc geninfo_unexecuted_blocks=1 00:13:37.173 00:13:37.173 ' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.173 --rc genhtml_branch_coverage=1 00:13:37.173 --rc genhtml_function_coverage=1 00:13:37.173 --rc genhtml_legend=1 00:13:37.173 --rc geninfo_all_blocks=1 00:13:37.173 --rc geninfo_unexecuted_blocks=1 00:13:37.173 00:13:37.173 ' 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:37.173 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.174 15:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.174 
15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.174 15:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:37.174 15:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.174 15:47:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.739 15:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:43.739 15:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:43.739 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.740 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.740 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.740 15:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.740 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.740 15:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:13:43.740 00:13:43.740 --- 10.0.0.2 ping statistics --- 00:13:43.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.740 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:13:43.740 00:13:43.740 --- 10.0.0.1 ping statistics --- 00:13:43.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.740 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.740 15:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=2376665 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 2376665 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2376665 ']' 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.740 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:43.740 [2024-10-01 15:47:53.392134] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:13:43.740 [2024-10-01 15:47:53.392179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.740 [2024-10-01 15:47:53.464441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.740 [2024-10-01 15:47:53.536251] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.740 [2024-10-01 15:47:53.536290] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.740 [2024-10-01 15:47:53.536297] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.740 [2024-10-01 15:47:53.536303] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.741 [2024-10-01 15:47:53.536311] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.741 [2024-10-01 15:47:53.536369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.741 [2024-10-01 15:47:53.536479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.741 [2024-10-01 15:47:53.536585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.741 [2024-10-01 15:47:53.536586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28368 00:13:44.306 [2024-10-01 15:47:54.437333] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:44.306 { 00:13:44.306 "nqn": "nqn.2016-06.io.spdk:cnode28368", 00:13:44.306 "tgt_name": "foobar", 00:13:44.306 "method": "nvmf_create_subsystem", 00:13:44.306 "req_id": 1 00:13:44.306 } 00:13:44.306 Got JSON-RPC error 
response 00:13:44.306 response: 00:13:44.306 { 00:13:44.306 "code": -32603, 00:13:44.306 "message": "Unable to find target foobar" 00:13:44.306 }' 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:44.306 { 00:13:44.306 "nqn": "nqn.2016-06.io.spdk:cnode28368", 00:13:44.306 "tgt_name": "foobar", 00:13:44.306 "method": "nvmf_create_subsystem", 00:13:44.306 "req_id": 1 00:13:44.306 } 00:13:44.306 Got JSON-RPC error response 00:13:44.306 response: 00:13:44.306 { 00:13:44.306 "code": -32603, 00:13:44.306 "message": "Unable to find target foobar" 00:13:44.306 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:44.306 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23196 00:13:44.565 [2024-10-01 15:47:54.650109] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23196: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:44.565 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:44.565 { 00:13:44.565 "nqn": "nqn.2016-06.io.spdk:cnode23196", 00:13:44.565 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.565 "method": "nvmf_create_subsystem", 00:13:44.565 "req_id": 1 00:13:44.565 } 00:13:44.565 Got JSON-RPC error response 00:13:44.565 response: 00:13:44.565 { 00:13:44.565 "code": -32602, 00:13:44.565 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.565 }' 00:13:44.565 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:44.565 { 00:13:44.565 "nqn": "nqn.2016-06.io.spdk:cnode23196", 00:13:44.565 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.565 "method": "nvmf_create_subsystem", 
00:13:44.565 "req_id": 1 00:13:44.565 } 00:13:44.565 Got JSON-RPC error response 00:13:44.565 response: 00:13:44.565 { 00:13:44.565 "code": -32602, 00:13:44.565 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.565 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:44.565 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:44.565 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1919 00:13:44.824 [2024-10-01 15:47:54.850762] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1919: invalid model number 'SPDK_Controller' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:44.824 { 00:13:44.824 "nqn": "nqn.2016-06.io.spdk:cnode1919", 00:13:44.824 "model_number": "SPDK_Controller\u001f", 00:13:44.824 "method": "nvmf_create_subsystem", 00:13:44.824 "req_id": 1 00:13:44.824 } 00:13:44.824 Got JSON-RPC error response 00:13:44.824 response: 00:13:44.824 { 00:13:44.824 "code": -32602, 00:13:44.824 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.824 }' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:44.824 { 00:13:44.824 "nqn": "nqn.2016-06.io.spdk:cnode1919", 00:13:44.824 "model_number": "SPDK_Controller\u001f", 00:13:44.824 "method": "nvmf_create_subsystem", 00:13:44.824 "req_id": 1 00:13:44.824 } 00:13:44.824 Got JSON-RPC error response 00:13:44.824 response: 00:13:44.824 { 00:13:44.824 "code": -32602, 00:13:44.824 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.824 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:44.824 15:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:44.824 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:44.825 15:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:44.825 15:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.825 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo J%-xO0tgpGOl_XUY9HG9% 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s J%-xO0tgpGOl_XUY9HG9% nqn.2016-06.io.spdk:cnode13872 00:13:45.084 [2024-10-01 15:47:55.191895] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13872: invalid serial number 'J%-xO0tgpGOl_XUY9HG9%' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:45.084 { 00:13:45.084 "nqn": "nqn.2016-06.io.spdk:cnode13872", 00:13:45.084 "serial_number": "J%-xO0tgpGOl_XUY9HG9%", 00:13:45.084 "method": "nvmf_create_subsystem", 00:13:45.084 "req_id": 1 00:13:45.084 } 00:13:45.084 Got JSON-RPC error response 00:13:45.084 response: 00:13:45.084 { 00:13:45.084 "code": -32602, 00:13:45.084 "message": "Invalid SN J%-xO0tgpGOl_XUY9HG9%" 00:13:45.084 }' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:45.084 { 00:13:45.084 "nqn": "nqn.2016-06.io.spdk:cnode13872", 00:13:45.084 "serial_number": "J%-xO0tgpGOl_XUY9HG9%", 00:13:45.084 "method": "nvmf_create_subsystem", 00:13:45.084 "req_id": 1 00:13:45.084 } 00:13:45.084 Got JSON-RPC error response 00:13:45.084 response: 00:13:45.084 { 00:13:45.084 "code": -32602, 00:13:45.084 "message": "Invalid SN J%-xO0tgpGOl_XUY9HG9%" 00:13:45.084 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:45.084 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:45.084 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.084 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:45.343 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:45.343 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 
00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:45.344 
15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 
00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:45.344 
15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:45.344 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.345 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE' 00:13:45.345 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE' nqn.2016-06.io.spdk:cnode9043 00:13:45.603 [2024-10-01 15:47:55.661463] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9043: invalid model number 'qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE' 00:13:45.603 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:45.603 { 00:13:45.603 "nqn": 
"nqn.2016-06.io.spdk:cnode9043", 00:13:45.603 "model_number": "qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE", 00:13:45.603 "method": "nvmf_create_subsystem", 00:13:45.603 "req_id": 1 00:13:45.603 } 00:13:45.603 Got JSON-RPC error response 00:13:45.603 response: 00:13:45.603 { 00:13:45.603 "code": -32602, 00:13:45.603 "message": "Invalid MN qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE" 00:13:45.603 }' 00:13:45.603 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:45.603 { 00:13:45.603 "nqn": "nqn.2016-06.io.spdk:cnode9043", 00:13:45.603 "model_number": "qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE", 00:13:45.603 "method": "nvmf_create_subsystem", 00:13:45.603 "req_id": 1 00:13:45.603 } 00:13:45.603 Got JSON-RPC error response 00:13:45.603 response: 00:13:45.603 { 00:13:45.603 "code": -32602, 00:13:45.603 "message": "Invalid MN qCoiD*xb_>mZ!F[&M+g vAG?2iib{v0|6`7 2fPmE" 00:13:45.603 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:45.603 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:45.861 [2024-10-01 15:47:55.874260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.861 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:46.119 [2024-10-01 15:47:56.271603] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:46.119 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:46.119 { 00:13:46.119 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:46.119 "listen_address": { 00:13:46.119 "trtype": "tcp", 00:13:46.119 "traddr": "", 00:13:46.119 "trsvcid": "4421" 00:13:46.119 }, 00:13:46.119 "method": "nvmf_subsystem_remove_listener", 00:13:46.119 "req_id": 1 00:13:46.119 } 00:13:46.119 Got JSON-RPC error response 00:13:46.119 response: 00:13:46.119 { 00:13:46.119 "code": -32602, 00:13:46.119 "message": "Invalid parameters" 00:13:46.119 }' 00:13:46.120 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:46.120 { 00:13:46.120 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:46.120 "listen_address": { 00:13:46.120 "trtype": "tcp", 00:13:46.120 "traddr": "", 00:13:46.120 "trsvcid": "4421" 00:13:46.120 }, 00:13:46.120 "method": "nvmf_subsystem_remove_listener", 00:13:46.120 "req_id": 1 00:13:46.120 } 00:13:46.120 Got JSON-RPC error response 00:13:46.120 response: 00:13:46.120 { 00:13:46.120 "code": -32602, 00:13:46.120 "message": "Invalid parameters" 00:13:46.120 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:46.120 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24002 -i 0 00:13:46.378 [2024-10-01 15:47:56.476233] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24002: invalid cntlid range [0-65519] 00:13:46.378 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:46.378 { 00:13:46.378 "nqn": 
"nqn.2016-06.io.spdk:cnode24002", 00:13:46.378 "min_cntlid": 0, 00:13:46.378 "method": "nvmf_create_subsystem", 00:13:46.378 "req_id": 1 00:13:46.378 } 00:13:46.378 Got JSON-RPC error response 00:13:46.378 response: 00:13:46.378 { 00:13:46.378 "code": -32602, 00:13:46.378 "message": "Invalid cntlid range [0-65519]" 00:13:46.378 }' 00:13:46.378 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:46.378 { 00:13:46.378 "nqn": "nqn.2016-06.io.spdk:cnode24002", 00:13:46.378 "min_cntlid": 0, 00:13:46.378 "method": "nvmf_create_subsystem", 00:13:46.378 "req_id": 1 00:13:46.378 } 00:13:46.378 Got JSON-RPC error response 00:13:46.378 response: 00:13:46.378 { 00:13:46.378 "code": -32602, 00:13:46.378 "message": "Invalid cntlid range [0-65519]" 00:13:46.378 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.378 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14015 -i 65520 00:13:46.636 [2024-10-01 15:47:56.668892] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14015: invalid cntlid range [65520-65519] 00:13:46.636 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:46.636 { 00:13:46.636 "nqn": "nqn.2016-06.io.spdk:cnode14015", 00:13:46.636 "min_cntlid": 65520, 00:13:46.636 "method": "nvmf_create_subsystem", 00:13:46.636 "req_id": 1 00:13:46.636 } 00:13:46.636 Got JSON-RPC error response 00:13:46.636 response: 00:13:46.636 { 00:13:46.636 "code": -32602, 00:13:46.636 "message": "Invalid cntlid range [65520-65519]" 00:13:46.636 }' 00:13:46.636 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:46.636 { 00:13:46.636 "nqn": "nqn.2016-06.io.spdk:cnode14015", 00:13:46.636 "min_cntlid": 65520, 00:13:46.636 "method": "nvmf_create_subsystem", 00:13:46.636 "req_id": 
1 00:13:46.636 } 00:13:46.636 Got JSON-RPC error response 00:13:46.636 response: 00:13:46.636 { 00:13:46.636 "code": -32602, 00:13:46.636 "message": "Invalid cntlid range [65520-65519]" 00:13:46.636 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.636 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23391 -I 0 00:13:46.894 [2024-10-01 15:47:56.861482] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23391: invalid cntlid range [1-0] 00:13:46.894 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:46.894 { 00:13:46.894 "nqn": "nqn.2016-06.io.spdk:cnode23391", 00:13:46.894 "max_cntlid": 0, 00:13:46.894 "method": "nvmf_create_subsystem", 00:13:46.894 "req_id": 1 00:13:46.894 } 00:13:46.894 Got JSON-RPC error response 00:13:46.894 response: 00:13:46.894 { 00:13:46.894 "code": -32602, 00:13:46.894 "message": "Invalid cntlid range [1-0]" 00:13:46.894 }' 00:13:46.894 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:46.894 { 00:13:46.894 "nqn": "nqn.2016-06.io.spdk:cnode23391", 00:13:46.894 "max_cntlid": 0, 00:13:46.894 "method": "nvmf_create_subsystem", 00:13:46.894 "req_id": 1 00:13:46.894 } 00:13:46.894 Got JSON-RPC error response 00:13:46.894 response: 00:13:46.894 { 00:13:46.894 "code": -32602, 00:13:46.894 "message": "Invalid cntlid range [1-0]" 00:13:46.894 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.894 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6263 -I 65520 00:13:46.894 [2024-10-01 15:47:57.054122] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6263: invalid cntlid range [1-65520] 
00:13:46.894 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:46.894 { 00:13:46.894 "nqn": "nqn.2016-06.io.spdk:cnode6263", 00:13:46.894 "max_cntlid": 65520, 00:13:46.894 "method": "nvmf_create_subsystem", 00:13:46.895 "req_id": 1 00:13:46.895 } 00:13:46.895 Got JSON-RPC error response 00:13:46.895 response: 00:13:46.895 { 00:13:46.895 "code": -32602, 00:13:46.895 "message": "Invalid cntlid range [1-65520]" 00:13:46.895 }' 00:13:46.895 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:46.895 { 00:13:46.895 "nqn": "nqn.2016-06.io.spdk:cnode6263", 00:13:46.895 "max_cntlid": 65520, 00:13:46.895 "method": "nvmf_create_subsystem", 00:13:46.895 "req_id": 1 00:13:46.895 } 00:13:46.895 Got JSON-RPC error response 00:13:46.895 response: 00:13:46.895 { 00:13:46.895 "code": -32602, 00:13:46.895 "message": "Invalid cntlid range [1-65520]" 00:13:46.895 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:47.154 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29997 -i 6 -I 5 00:13:47.154 [2024-10-01 15:47:57.266895] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29997: invalid cntlid range [6-5] 00:13:47.154 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:47.154 { 00:13:47.154 "nqn": "nqn.2016-06.io.spdk:cnode29997", 00:13:47.154 "min_cntlid": 6, 00:13:47.154 "max_cntlid": 5, 00:13:47.154 "method": "nvmf_create_subsystem", 00:13:47.154 "req_id": 1 00:13:47.154 } 00:13:47.154 Got JSON-RPC error response 00:13:47.154 response: 00:13:47.154 { 00:13:47.154 "code": -32602, 00:13:47.154 "message": "Invalid cntlid range [6-5]" 00:13:47.154 }' 00:13:47.154 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:47.154 { 
00:13:47.154 "nqn": "nqn.2016-06.io.spdk:cnode29997", 00:13:47.154 "min_cntlid": 6, 00:13:47.154 "max_cntlid": 5, 00:13:47.154 "method": "nvmf_create_subsystem", 00:13:47.154 "req_id": 1 00:13:47.154 } 00:13:47.154 Got JSON-RPC error response 00:13:47.154 response: 00:13:47.154 { 00:13:47.154 "code": -32602, 00:13:47.154 "message": "Invalid cntlid range [6-5]" 00:13:47.154 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:47.154 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:47.413 { 00:13:47.413 "name": "foobar", 00:13:47.413 "method": "nvmf_delete_target", 00:13:47.413 "req_id": 1 00:13:47.413 } 00:13:47.413 Got JSON-RPC error response 00:13:47.413 response: 00:13:47.413 { 00:13:47.413 "code": -32602, 00:13:47.413 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:47.413 }' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:47.413 { 00:13:47.413 "name": "foobar", 00:13:47.413 "method": "nvmf_delete_target", 00:13:47.413 "req_id": 1 00:13:47.413 } 00:13:47.413 Got JSON-RPC error response 00:13:47.413 response: 00:13:47.413 { 00:13:47.413 "code": -32602, 00:13:47.413 "message": "The specified target doesn't exist, cannot delete it." 
00:13:47.413 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.413 rmmod nvme_tcp 00:13:47.413 rmmod nvme_fabrics 00:13:47.413 rmmod nvme_keyring 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 2376665 ']' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 2376665 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2376665 ']' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2376665 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2376665 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2376665' 00:13:47.413 killing process with pid 2376665 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2376665 00:13:47.413 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2376665 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.672 15:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.672 15:47:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.206 00:13:50.206 real 0m12.652s 00:13:50.206 user 0m20.908s 00:13:50.206 sys 0m5.494s 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:50.206 ************************************ 00:13:50.206 END TEST nvmf_invalid 00:13:50.206 ************************************ 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.206 ************************************ 00:13:50.206 START TEST nvmf_connect_stress 00:13:50.206 ************************************ 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.206 * Looking for test storage... 
00:13:50.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:50.206 15:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.206 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:50.207 15:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.207 15:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.207 --rc genhtml_branch_coverage=1 00:13:50.207 --rc genhtml_function_coverage=1 00:13:50.207 --rc genhtml_legend=1 00:13:50.207 --rc geninfo_all_blocks=1 00:13:50.207 --rc geninfo_unexecuted_blocks=1 00:13:50.207 00:13:50.207 ' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.207 --rc genhtml_branch_coverage=1 00:13:50.207 --rc genhtml_function_coverage=1 00:13:50.207 --rc genhtml_legend=1 00:13:50.207 --rc geninfo_all_blocks=1 00:13:50.207 --rc geninfo_unexecuted_blocks=1 00:13:50.207 00:13:50.207 ' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.207 --rc genhtml_branch_coverage=1 00:13:50.207 --rc genhtml_function_coverage=1 00:13:50.207 --rc genhtml_legend=1 00:13:50.207 --rc geninfo_all_blocks=1 00:13:50.207 --rc geninfo_unexecuted_blocks=1 00:13:50.207 00:13:50.207 ' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.207 --rc genhtml_branch_coverage=1 00:13:50.207 --rc genhtml_function_coverage=1 00:13:50.207 --rc genhtml_legend=1 00:13:50.207 --rc geninfo_all_blocks=1 00:13:50.207 --rc geninfo_unexecuted_blocks=1 00:13:50.207 00:13:50.207 ' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.207 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:50.208 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:50.208 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.208 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.777 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.777 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.777 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.777 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.777 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.777 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.777 15:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.777 15:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.777 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.777 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.777 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:13:56.778 00:13:56.778 --- 10.0.0.2 ping statistics --- 00:13:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.778 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:13:56.778 00:13:56.778 --- 10.0.0.1 ping statistics --- 00:13:56.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.778 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:56.778 15:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=2381174 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 2381174 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2381174 ']' 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.778 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.778 [2024-10-01 15:48:06.133573] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:13:56.778 [2024-10-01 15:48:06.133622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.778 [2024-10-01 15:48:06.204560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.778 [2024-10-01 15:48:06.284611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.778 [2024-10-01 15:48:06.284651] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.778 [2024-10-01 15:48:06.284659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.778 [2024-10-01 15:48:06.284665] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.778 [2024-10-01 15:48:06.284671] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.778 [2024-10-01 15:48:06.284725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.778 [2024-10-01 15:48:06.284759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.778 [2024-10-01 15:48:06.284761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.037 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.037 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:57.037 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:57.037 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.037 15:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 [2024-10-01 15:48:07.019161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 [2024-10-01 15:48:07.049249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.037 NULL1 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2381273 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.037 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.296 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.296 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:57.296 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.296 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.296 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.863 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.863 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:57.863 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.863 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.863 15:48:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.121 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.121 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:58.122 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.122 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.122 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.380 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.380 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:58.380 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.380 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.380 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.638 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.638 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:58.638 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.638 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.638 15:48:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.204 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.204 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:59.204 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.204 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.204 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.462 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:59.462 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.462 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.462 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.720 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.720 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:59.720 15:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.720 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.720 15:48:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.979 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.979 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:13:59.979 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.979 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.979 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.236 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.236 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:00.236 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.236 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.236 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.801 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.801 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:00.801 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.801 15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.801 
15:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.058 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.058 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:01.058 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.058 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.058 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:01.316 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.316 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.573 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.573 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:01.573 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.573 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.573 15:48:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.139 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.139 
15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:02.139 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.139 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.139 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.398 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.398 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:02.398 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.398 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.398 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.656 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.656 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:02.656 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.656 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.656 15:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.914 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.914 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:02.914 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:02.914 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.914 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.172 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.172 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:03.172 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.172 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.172 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.738 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.738 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:03.738 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.738 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.738 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.997 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.997 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:03.997 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.997 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.997 15:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:04.255 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.255 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:04.255 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.255 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.255 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.514 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.514 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:04.514 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.514 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.514 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.079 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.080 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:05.080 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.080 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.080 15:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.338 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.338 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2381273 00:14:05.338 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.338 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.338 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.597 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.597 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:05.597 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.597 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.597 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.857 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.857 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:05.857 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.857 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.857 15:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.114 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.114 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:06.114 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.114 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:06.115 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.680 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.680 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:06.680 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.680 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.680 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.938 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.938 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:06.938 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.938 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.938 15:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.196 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2381273 00:14:07.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2381273) - No such process 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2381273 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.196 rmmod nvme_tcp 00:14:07.196 rmmod nvme_fabrics 00:14:07.196 rmmod nvme_keyring 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 2381174 ']' 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 2381174 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2381174 ']' 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2381174 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2381174 00:14:07.196 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:07.197 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:07.197 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2381174' 00:14:07.197 killing process with pid 2381174 00:14:07.197 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2381174 00:14:07.197 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2381174 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.455 15:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.455 15:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.986 00:14:09.986 real 0m19.762s 00:14:09.986 user 0m41.475s 00:14:09.986 sys 0m8.583s 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.986 ************************************ 00:14:09.986 END TEST nvmf_connect_stress 00:14:09.986 ************************************ 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.986 ************************************ 00:14:09.986 START TEST nvmf_fused_ordering 00:14:09.986 ************************************ 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:09.986 * Looking for test storage... 
00:14:09.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:09.986 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:09.987 15:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.987 15:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:09.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.987 --rc genhtml_branch_coverage=1 00:14:09.987 --rc genhtml_function_coverage=1 00:14:09.987 --rc genhtml_legend=1 00:14:09.987 --rc geninfo_all_blocks=1 00:14:09.987 --rc geninfo_unexecuted_blocks=1 00:14:09.987 00:14:09.987 ' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:09.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.987 --rc genhtml_branch_coverage=1 00:14:09.987 --rc genhtml_function_coverage=1 00:14:09.987 --rc genhtml_legend=1 00:14:09.987 --rc geninfo_all_blocks=1 00:14:09.987 --rc geninfo_unexecuted_blocks=1 00:14:09.987 00:14:09.987 ' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:09.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.987 --rc genhtml_branch_coverage=1 00:14:09.987 --rc genhtml_function_coverage=1 00:14:09.987 --rc genhtml_legend=1 00:14:09.987 --rc geninfo_all_blocks=1 00:14:09.987 --rc geninfo_unexecuted_blocks=1 00:14:09.987 00:14:09.987 ' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:09.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.987 --rc genhtml_branch_coverage=1 00:14:09.987 --rc genhtml_function_coverage=1 00:14:09.987 --rc genhtml_legend=1 00:14:09.987 --rc geninfo_all_blocks=1 00:14:09.987 --rc geninfo_unexecuted_blocks=1 00:14:09.987 00:14:09.987 ' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:09.987 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.988 15:48:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.561 15:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:16.561 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:16.561 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:16.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:16.562 Found net devices under 0000:86:00.0: cvl_0_0 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:16.562 Found net devices under 0000:86:00.1: cvl_0_1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.562 15:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.562 15:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:14:16.562 00:14:16.562 --- 10.0.0.2 ping statistics --- 00:14:16.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.562 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:14:16.562 00:14:16.562 --- 10.0.0.1 ping statistics --- 00:14:16.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.562 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:16.562 15:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=2386970 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 2386970 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2386970 ']' 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.562 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.563 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.563 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.563 15:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:16.563 [2024-10-01 15:48:25.926401] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:16.563 [2024-10-01 15:48:25.926443] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:16.563 [2024-10-01 15:48:25.996473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:16.563 [2024-10-01 15:48:26.074234] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:16.563 [2024-10-01 15:48:26.074272] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:16.563 [2024-10-01 15:48:26.074280] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:16.563 [2024-10-01 15:48:26.074286] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:16.563 [2024-10-01 15:48:26.074294] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:16.563 [2024-10-01 15:48:26.074312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.878 [2024-10-01 15:48:26.801037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.878 [2024-10-01 15:48:26.821243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.878 NULL1
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.878 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.879 15:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:16.879 [2024-10-01 15:48:26.875604] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:14:16.879 [2024-10-01 15:48:26.875637] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387106 ]
00:14:17.171 Attached to nqn.2016-06.io.spdk:cnode1
00:14:17.171 Namespace ID: 1 size: 1GB
00:14:17.171 fused_ordering(0) 00:14:17.171 fused_ordering(1) 00:14:17.171 fused_ordering(2) 00:14:17.171 fused_ordering(3) 00:14:17.171 fused_ordering(4) 00:14:17.171 fused_ordering(5) 00:14:17.171 fused_ordering(6) 00:14:17.171 fused_ordering(7) 00:14:17.171 fused_ordering(8) 00:14:17.171 fused_ordering(9) 00:14:17.171 fused_ordering(10) 00:14:17.171 fused_ordering(11) 00:14:17.171 fused_ordering(12) 00:14:17.171 fused_ordering(13) 00:14:17.171 fused_ordering(14) 00:14:17.171 fused_ordering(15) 00:14:17.171 fused_ordering(16) 00:14:17.171 fused_ordering(17) 00:14:17.171 fused_ordering(18) 00:14:17.171 fused_ordering(19) 00:14:17.171 fused_ordering(20) 00:14:17.171 fused_ordering(21) 00:14:17.171 fused_ordering(22) 00:14:17.171 fused_ordering(23) 00:14:17.171 fused_ordering(24) 00:14:17.171 fused_ordering(25) 00:14:17.171 fused_ordering(26) 00:14:17.171 fused_ordering(27) 00:14:17.171 
fused_ordering(28) 00:14:17.171 fused_ordering(29) 00:14:17.171 fused_ordering(30) 00:14:17.171 fused_ordering(31) 00:14:17.171 fused_ordering(32) 00:14:17.171 fused_ordering(33) 00:14:17.171 fused_ordering(34) 00:14:17.171 fused_ordering(35) 00:14:17.171 fused_ordering(36) 00:14:17.171 fused_ordering(37) 00:14:17.171 fused_ordering(38) 00:14:17.171 fused_ordering(39) 00:14:17.171 fused_ordering(40) 00:14:17.171 fused_ordering(41) 00:14:17.171 fused_ordering(42) 00:14:17.171 fused_ordering(43) 00:14:17.171 fused_ordering(44) 00:14:17.171 fused_ordering(45) 00:14:17.171 fused_ordering(46) 00:14:17.171 fused_ordering(47) 00:14:17.171 fused_ordering(48) 00:14:17.171 fused_ordering(49) 00:14:17.171 fused_ordering(50) 00:14:17.171 fused_ordering(51) 00:14:17.171 fused_ordering(52) 00:14:17.171 fused_ordering(53) 00:14:17.171 fused_ordering(54) 00:14:17.171 fused_ordering(55) 00:14:17.171 fused_ordering(56) 00:14:17.171 fused_ordering(57) 00:14:17.171 fused_ordering(58) 00:14:17.171 fused_ordering(59) 00:14:17.171 fused_ordering(60) 00:14:17.171 fused_ordering(61) 00:14:17.171 fused_ordering(62) 00:14:17.171 fused_ordering(63) 00:14:17.171 fused_ordering(64) 00:14:17.171 fused_ordering(65) 00:14:17.171 fused_ordering(66) 00:14:17.171 fused_ordering(67) 00:14:17.171 fused_ordering(68) 00:14:17.171 fused_ordering(69) 00:14:17.171 fused_ordering(70) 00:14:17.171 fused_ordering(71) 00:14:17.171 fused_ordering(72) 00:14:17.171 fused_ordering(73) 00:14:17.171 fused_ordering(74) 00:14:17.171 fused_ordering(75) 00:14:17.171 fused_ordering(76) 00:14:17.171 fused_ordering(77) 00:14:17.171 fused_ordering(78) 00:14:17.171 fused_ordering(79) 00:14:17.171 fused_ordering(80) 00:14:17.171 fused_ordering(81) 00:14:17.171 fused_ordering(82) 00:14:17.171 fused_ordering(83) 00:14:17.171 fused_ordering(84) 00:14:17.171 fused_ordering(85) 00:14:17.171 fused_ordering(86) 00:14:17.171 fused_ordering(87) 00:14:17.171 fused_ordering(88) 00:14:17.171 fused_ordering(89) 00:14:17.171 
fused_ordering(90) 00:14:17.171 fused_ordering(91) 00:14:17.171 fused_ordering(92) 00:14:17.171 fused_ordering(93) 00:14:17.171 fused_ordering(94) 00:14:17.171 fused_ordering(95) 00:14:17.171 fused_ordering(96) 00:14:17.171 fused_ordering(97) 00:14:17.171 fused_ordering(98) 00:14:17.171 fused_ordering(99) 00:14:17.171 fused_ordering(100) 00:14:17.171 fused_ordering(101) 00:14:17.171 fused_ordering(102) 00:14:17.171 fused_ordering(103) 00:14:17.171 fused_ordering(104) 00:14:17.171 fused_ordering(105) 00:14:17.171 fused_ordering(106) 00:14:17.171 fused_ordering(107) 00:14:17.171 fused_ordering(108) 00:14:17.171 fused_ordering(109) 00:14:17.171 fused_ordering(110) 00:14:17.171 fused_ordering(111) 00:14:17.171 fused_ordering(112) 00:14:17.171 fused_ordering(113) 00:14:17.171 fused_ordering(114) 00:14:17.171 fused_ordering(115) 00:14:17.171 fused_ordering(116) 00:14:17.171 fused_ordering(117) 00:14:17.171 fused_ordering(118) 00:14:17.171 fused_ordering(119) 00:14:17.171 fused_ordering(120) 00:14:17.171 fused_ordering(121) 00:14:17.171 fused_ordering(122) 00:14:17.171 fused_ordering(123) 00:14:17.171 fused_ordering(124) 00:14:17.171 fused_ordering(125) 00:14:17.171 fused_ordering(126) 00:14:17.171 fused_ordering(127) 00:14:17.171 fused_ordering(128) 00:14:17.171 fused_ordering(129) 00:14:17.171 fused_ordering(130) 00:14:17.171 fused_ordering(131) 00:14:17.171 fused_ordering(132) 00:14:17.171 fused_ordering(133) 00:14:17.171 fused_ordering(134) 00:14:17.171 fused_ordering(135) 00:14:17.171 fused_ordering(136) 00:14:17.171 fused_ordering(137) 00:14:17.171 fused_ordering(138) 00:14:17.171 fused_ordering(139) 00:14:17.171 fused_ordering(140) 00:14:17.171 fused_ordering(141) 00:14:17.171 fused_ordering(142) 00:14:17.171 fused_ordering(143) 00:14:17.171 fused_ordering(144) 00:14:17.171 fused_ordering(145) 00:14:17.171 fused_ordering(146) 00:14:17.171 fused_ordering(147) 00:14:17.171 fused_ordering(148) 00:14:17.171 fused_ordering(149) 00:14:17.171 fused_ordering(150) 
00:14:17.171 fused_ordering(151) 00:14:17.171 fused_ordering(152) 00:14:17.171 fused_ordering(153) 00:14:17.171 fused_ordering(154) 00:14:17.171 fused_ordering(155) 00:14:17.171 fused_ordering(156) 00:14:17.171 fused_ordering(157) 00:14:17.171 fused_ordering(158) 00:14:17.171 fused_ordering(159) 00:14:17.171 fused_ordering(160) 00:14:17.171 fused_ordering(161) 00:14:17.171 fused_ordering(162) 00:14:17.171 fused_ordering(163) 00:14:17.171 fused_ordering(164) 00:14:17.171 fused_ordering(165) 00:14:17.171 fused_ordering(166) 00:14:17.171 fused_ordering(167) 00:14:17.171 fused_ordering(168) 00:14:17.171 fused_ordering(169) 00:14:17.171 fused_ordering(170) 00:14:17.171 fused_ordering(171) 00:14:17.171 fused_ordering(172) 00:14:17.171 fused_ordering(173) 00:14:17.171 fused_ordering(174) 00:14:17.171 fused_ordering(175) 00:14:17.171 fused_ordering(176) 00:14:17.171 fused_ordering(177) 00:14:17.171 fused_ordering(178) 00:14:17.171 fused_ordering(179) 00:14:17.171 fused_ordering(180) 00:14:17.171 fused_ordering(181) 00:14:17.171 fused_ordering(182) 00:14:17.171 fused_ordering(183) 00:14:17.171 fused_ordering(184) 00:14:17.171 fused_ordering(185) 00:14:17.171 fused_ordering(186) 00:14:17.171 fused_ordering(187) 00:14:17.171 fused_ordering(188) 00:14:17.171 fused_ordering(189) 00:14:17.171 fused_ordering(190) 00:14:17.171 fused_ordering(191) 00:14:17.171 fused_ordering(192) 00:14:17.171 fused_ordering(193) 00:14:17.171 fused_ordering(194) 00:14:17.171 fused_ordering(195) 00:14:17.171 fused_ordering(196) 00:14:17.171 fused_ordering(197) 00:14:17.171 fused_ordering(198) 00:14:17.171 fused_ordering(199) 00:14:17.171 fused_ordering(200) 00:14:17.171 fused_ordering(201) 00:14:17.171 fused_ordering(202) 00:14:17.171 fused_ordering(203) 00:14:17.171 fused_ordering(204) 00:14:17.171 fused_ordering(205) 00:14:17.431 fused_ordering(206) 00:14:17.431 fused_ordering(207) 00:14:17.431 fused_ordering(208) 00:14:17.431 fused_ordering(209) 00:14:17.431 fused_ordering(210) 00:14:17.431 
fused_ordering(211) 00:14:17.431 fused_ordering(212) 00:14:17.431 fused_ordering(213) 00:14:17.431 fused_ordering(214) 00:14:17.431 fused_ordering(215) 00:14:17.431 fused_ordering(216) 00:14:17.431 fused_ordering(217) 00:14:17.431 fused_ordering(218) 00:14:17.431 fused_ordering(219) 00:14:17.431 fused_ordering(220) 00:14:17.431 fused_ordering(221) 00:14:17.431 fused_ordering(222) 00:14:17.431 fused_ordering(223) 00:14:17.431 fused_ordering(224) 00:14:17.431 fused_ordering(225) 00:14:17.431 fused_ordering(226) 00:14:17.431 fused_ordering(227) 00:14:17.431 fused_ordering(228) 00:14:17.431 fused_ordering(229) 00:14:17.431 fused_ordering(230) 00:14:17.431 fused_ordering(231) 00:14:17.431 fused_ordering(232) 00:14:17.431 fused_ordering(233) 00:14:17.431 fused_ordering(234) 00:14:17.431 fused_ordering(235) 00:14:17.431 fused_ordering(236) 00:14:17.431 fused_ordering(237) 00:14:17.431 fused_ordering(238) 00:14:17.431 fused_ordering(239) 00:14:17.431 fused_ordering(240) 00:14:17.431 fused_ordering(241) 00:14:17.431 fused_ordering(242) 00:14:17.431 fused_ordering(243) 00:14:17.431 fused_ordering(244) 00:14:17.431 fused_ordering(245) 00:14:17.431 fused_ordering(246) 00:14:17.431 fused_ordering(247) 00:14:17.431 fused_ordering(248) 00:14:17.431 fused_ordering(249) 00:14:17.431 fused_ordering(250) 00:14:17.431 fused_ordering(251) 00:14:17.431 fused_ordering(252) 00:14:17.431 fused_ordering(253) 00:14:17.431 fused_ordering(254) 00:14:17.431 fused_ordering(255) 00:14:17.431 fused_ordering(256) 00:14:17.431 fused_ordering(257) 00:14:17.431 fused_ordering(258) 00:14:17.431 fused_ordering(259) 00:14:17.431 fused_ordering(260) 00:14:17.431 fused_ordering(261) 00:14:17.431 fused_ordering(262) 00:14:17.431 fused_ordering(263) 00:14:17.431 fused_ordering(264) 00:14:17.431 fused_ordering(265) 00:14:17.431 fused_ordering(266) 00:14:17.431 fused_ordering(267) 00:14:17.431 fused_ordering(268) 00:14:17.431 fused_ordering(269) 00:14:17.431 fused_ordering(270) 00:14:17.431 fused_ordering(271) 
00:14:17.431 fused_ordering(272) 00:14:17.431 fused_ordering(273) 00:14:17.431 fused_ordering(274) 00:14:17.431 fused_ordering(275) 00:14:17.431 fused_ordering(276) 00:14:17.431 fused_ordering(277) 00:14:17.431 fused_ordering(278) 00:14:17.431 fused_ordering(279) 00:14:17.431 fused_ordering(280) 00:14:17.431 fused_ordering(281) 00:14:17.431 fused_ordering(282) 00:14:17.431 fused_ordering(283) 00:14:17.431 fused_ordering(284) 00:14:17.431 fused_ordering(285) 00:14:17.431 fused_ordering(286) 00:14:17.431 fused_ordering(287) 00:14:17.431 fused_ordering(288) 00:14:17.431 fused_ordering(289) 00:14:17.431 fused_ordering(290) 00:14:17.431 fused_ordering(291) 00:14:17.431 fused_ordering(292) 00:14:17.431 fused_ordering(293) 00:14:17.431 fused_ordering(294) 00:14:17.431 fused_ordering(295) 00:14:17.431 fused_ordering(296) 00:14:17.431 fused_ordering(297) 00:14:17.431 fused_ordering(298) 00:14:17.431 fused_ordering(299) 00:14:17.431 fused_ordering(300) 00:14:17.431 fused_ordering(301) 00:14:17.431 fused_ordering(302) 00:14:17.431 fused_ordering(303) 00:14:17.431 fused_ordering(304) 00:14:17.431 fused_ordering(305) 00:14:17.431 fused_ordering(306) 00:14:17.431 fused_ordering(307) 00:14:17.431 fused_ordering(308) 00:14:17.431 fused_ordering(309) 00:14:17.431 fused_ordering(310) 00:14:17.431 fused_ordering(311) 00:14:17.431 fused_ordering(312) 00:14:17.431 fused_ordering(313) 00:14:17.431 fused_ordering(314) 00:14:17.431 fused_ordering(315) 00:14:17.431 fused_ordering(316) 00:14:17.431 fused_ordering(317) 00:14:17.431 fused_ordering(318) 00:14:17.431 fused_ordering(319) 00:14:17.431 fused_ordering(320) 00:14:17.431 fused_ordering(321) 00:14:17.431 fused_ordering(322) 00:14:17.431 fused_ordering(323) 00:14:17.431 fused_ordering(324) 00:14:17.431 fused_ordering(325) 00:14:17.431 fused_ordering(326) 00:14:17.431 fused_ordering(327) 00:14:17.431 fused_ordering(328) 00:14:17.431 fused_ordering(329) 00:14:17.431 fused_ordering(330) 00:14:17.431 fused_ordering(331) 00:14:17.431 
fused_ordering(332) 00:14:17.431 fused_ordering(333) 00:14:17.431 fused_ordering(334) 00:14:17.431 fused_ordering(335) 00:14:17.432 fused_ordering(336) 00:14:17.432 fused_ordering(337) 00:14:17.432 fused_ordering(338) 00:14:17.432 fused_ordering(339) 00:14:17.432 fused_ordering(340) 00:14:17.432 fused_ordering(341) 00:14:17.432 fused_ordering(342) 00:14:17.432 fused_ordering(343) 00:14:17.432 fused_ordering(344) 00:14:17.432 fused_ordering(345) 00:14:17.432 fused_ordering(346) 00:14:17.432 fused_ordering(347) 00:14:17.432 fused_ordering(348) 00:14:17.432 fused_ordering(349) 00:14:17.432 fused_ordering(350) 00:14:17.432 fused_ordering(351) 00:14:17.432 fused_ordering(352) 00:14:17.432 fused_ordering(353) 00:14:17.432 fused_ordering(354) 00:14:17.432 fused_ordering(355) 00:14:17.432 fused_ordering(356) 00:14:17.432 fused_ordering(357) 00:14:17.432 fused_ordering(358) 00:14:17.432 fused_ordering(359) 00:14:17.432 fused_ordering(360) 00:14:17.432 fused_ordering(361) 00:14:17.432 fused_ordering(362) 00:14:17.432 fused_ordering(363) 00:14:17.432 fused_ordering(364) 00:14:17.432 fused_ordering(365) 00:14:17.432 fused_ordering(366) 00:14:17.432 fused_ordering(367) 00:14:17.432 fused_ordering(368) 00:14:17.432 fused_ordering(369) 00:14:17.432 fused_ordering(370) 00:14:17.432 fused_ordering(371) 00:14:17.432 fused_ordering(372) 00:14:17.432 fused_ordering(373) 00:14:17.432 fused_ordering(374) 00:14:17.432 fused_ordering(375) 00:14:17.432 fused_ordering(376) 00:14:17.432 fused_ordering(377) 00:14:17.432 fused_ordering(378) 00:14:17.432 fused_ordering(379) 00:14:17.432 fused_ordering(380) 00:14:17.432 fused_ordering(381) 00:14:17.432 fused_ordering(382) 00:14:17.432 fused_ordering(383) 00:14:17.432 fused_ordering(384) 00:14:17.432 fused_ordering(385) 00:14:17.432 fused_ordering(386) 00:14:17.432 fused_ordering(387) 00:14:17.432 fused_ordering(388) 00:14:17.432 fused_ordering(389) 00:14:17.432 fused_ordering(390) 00:14:17.432 fused_ordering(391) 00:14:17.432 fused_ordering(392) 
00:14:17.432 fused_ordering(393) 00:14:17.432 fused_ordering(394) 00:14:17.432 fused_ordering(395) 00:14:17.432 fused_ordering(396) 00:14:17.432 fused_ordering(397) 00:14:17.432 fused_ordering(398) 00:14:17.432 fused_ordering(399) 00:14:17.432 fused_ordering(400) 00:14:17.432 fused_ordering(401) 00:14:17.432 fused_ordering(402) 00:14:17.432 fused_ordering(403) 00:14:17.432 fused_ordering(404) 00:14:17.432 fused_ordering(405) 00:14:17.432 fused_ordering(406) 00:14:17.432 fused_ordering(407) 00:14:17.432 fused_ordering(408) 00:14:17.432 fused_ordering(409) 00:14:17.432 fused_ordering(410) 00:14:17.691 fused_ordering(411) 00:14:17.691 fused_ordering(412) 00:14:17.691 fused_ordering(413) 00:14:17.691 fused_ordering(414) 00:14:17.691 fused_ordering(415) 00:14:17.691 fused_ordering(416) 00:14:17.691 fused_ordering(417) 00:14:17.691 fused_ordering(418) 00:14:17.691 fused_ordering(419) 00:14:17.691 fused_ordering(420) 00:14:17.691 fused_ordering(421) 00:14:17.691 fused_ordering(422) 00:14:17.691 fused_ordering(423) 00:14:17.691 fused_ordering(424) 00:14:17.691 fused_ordering(425) 00:14:17.691 fused_ordering(426) 00:14:17.691 fused_ordering(427) 00:14:17.691 fused_ordering(428) 00:14:17.691 fused_ordering(429) 00:14:17.691 fused_ordering(430) 00:14:17.691 fused_ordering(431) 00:14:17.691 fused_ordering(432) 00:14:17.691 fused_ordering(433) 00:14:17.691 fused_ordering(434) 00:14:17.691 fused_ordering(435) 00:14:17.691 fused_ordering(436) 00:14:17.691 fused_ordering(437) 00:14:17.691 fused_ordering(438) 00:14:17.691 fused_ordering(439) 00:14:17.691 fused_ordering(440) 00:14:17.691 fused_ordering(441) 00:14:17.691 fused_ordering(442) 00:14:17.691 fused_ordering(443) 00:14:17.691 fused_ordering(444) 00:14:17.691 fused_ordering(445) 00:14:17.691 fused_ordering(446) 00:14:17.691 fused_ordering(447) 00:14:17.691 fused_ordering(448) 00:14:17.691 fused_ordering(449) 00:14:17.691 fused_ordering(450) 00:14:17.691 fused_ordering(451) 00:14:17.691 fused_ordering(452) 00:14:17.691 
fused_ordering(453) 00:14:17.691 fused_ordering(454) 00:14:17.691 fused_ordering(455) 00:14:17.691 fused_ordering(456) 00:14:17.691 fused_ordering(457) 00:14:17.691 fused_ordering(458) 00:14:17.691 fused_ordering(459) 00:14:17.691 fused_ordering(460) 00:14:17.691 fused_ordering(461) 00:14:17.691 fused_ordering(462) 00:14:17.691 fused_ordering(463) 00:14:17.691 fused_ordering(464) 00:14:17.691 fused_ordering(465) 00:14:17.691 fused_ordering(466) 00:14:17.691 fused_ordering(467) 00:14:17.691 fused_ordering(468) 00:14:17.691 fused_ordering(469) 00:14:17.691 fused_ordering(470) 00:14:17.691 fused_ordering(471) 00:14:17.691 fused_ordering(472) 00:14:17.691 fused_ordering(473) 00:14:17.691 fused_ordering(474) 00:14:17.691 fused_ordering(475) 00:14:17.691 fused_ordering(476) 00:14:17.691 fused_ordering(477) 00:14:17.691 fused_ordering(478) 00:14:17.691 fused_ordering(479) 00:14:17.691 fused_ordering(480) 00:14:17.691 fused_ordering(481) 00:14:17.691 fused_ordering(482) 00:14:17.691 fused_ordering(483) 00:14:17.691 fused_ordering(484) 00:14:17.691 fused_ordering(485) 00:14:17.691 fused_ordering(486) 00:14:17.691 fused_ordering(487) 00:14:17.691 fused_ordering(488) 00:14:17.691 fused_ordering(489) 00:14:17.691 fused_ordering(490) 00:14:17.691 fused_ordering(491) 00:14:17.691 fused_ordering(492) 00:14:17.691 fused_ordering(493) 00:14:17.691 fused_ordering(494) 00:14:17.691 fused_ordering(495) 00:14:17.691 fused_ordering(496) 00:14:17.691 fused_ordering(497) 00:14:17.691 fused_ordering(498) 00:14:17.691 fused_ordering(499) 00:14:17.691 fused_ordering(500) 00:14:17.691 fused_ordering(501) 00:14:17.691 fused_ordering(502) 00:14:17.691 fused_ordering(503) 00:14:17.691 fused_ordering(504) 00:14:17.691 fused_ordering(505) 00:14:17.691 fused_ordering(506) 00:14:17.691 fused_ordering(507) 00:14:17.691 fused_ordering(508) 00:14:17.691 fused_ordering(509) 00:14:17.691 fused_ordering(510) 00:14:17.691 fused_ordering(511) 00:14:17.691 fused_ordering(512) 00:14:17.691 fused_ordering(513) 
00:14:17.691 fused_ordering(514) 00:14:17.691 fused_ordering(515) 00:14:17.691 fused_ordering(516) 00:14:17.691 fused_ordering(517) 00:14:17.691 fused_ordering(518) 00:14:17.691 fused_ordering(519) 00:14:17.691 fused_ordering(520) 00:14:17.691 fused_ordering(521) 00:14:17.691 fused_ordering(522) 00:14:17.691 fused_ordering(523) 00:14:17.691 fused_ordering(524) 00:14:17.691 fused_ordering(525) 00:14:17.691 fused_ordering(526) 00:14:17.691 fused_ordering(527) 00:14:17.691 fused_ordering(528) 00:14:17.691 fused_ordering(529) 00:14:17.691 fused_ordering(530) 00:14:17.691 fused_ordering(531) 00:14:17.691 fused_ordering(532) 00:14:17.691 fused_ordering(533) 00:14:17.691 fused_ordering(534) 00:14:17.691 fused_ordering(535) 00:14:17.691 fused_ordering(536) 00:14:17.691 fused_ordering(537) 00:14:17.691 fused_ordering(538) 00:14:17.691 fused_ordering(539) 00:14:17.691 fused_ordering(540) 00:14:17.691 fused_ordering(541) 00:14:17.691 fused_ordering(542) 00:14:17.691 fused_ordering(543) 00:14:17.691 fused_ordering(544) 00:14:17.691 fused_ordering(545) 00:14:17.691 fused_ordering(546) 00:14:17.691 fused_ordering(547) 00:14:17.691 fused_ordering(548) 00:14:17.691 fused_ordering(549) 00:14:17.691 fused_ordering(550) 00:14:17.691 fused_ordering(551) 00:14:17.691 fused_ordering(552) 00:14:17.691 fused_ordering(553) 00:14:17.691 fused_ordering(554) 00:14:17.691 fused_ordering(555) 00:14:17.691 fused_ordering(556) 00:14:17.691 fused_ordering(557) 00:14:17.691 fused_ordering(558) 00:14:17.691 fused_ordering(559) 00:14:17.691 fused_ordering(560) 00:14:17.691 fused_ordering(561) 00:14:17.691 fused_ordering(562) 00:14:17.691 fused_ordering(563) 00:14:17.691 fused_ordering(564) 00:14:17.691 fused_ordering(565) 00:14:17.691 fused_ordering(566) 00:14:17.691 fused_ordering(567) 00:14:17.691 fused_ordering(568) 00:14:17.691 fused_ordering(569) 00:14:17.691 fused_ordering(570) 00:14:17.691 fused_ordering(571) 00:14:17.691 fused_ordering(572) 00:14:17.691 fused_ordering(573) 00:14:17.691 
fused_ordering(574) 00:14:17.691 fused_ordering(575) 00:14:17.691 fused_ordering(576) 00:14:17.691 fused_ordering(577) 00:14:17.691 fused_ordering(578) 00:14:17.691 fused_ordering(579) 00:14:17.691 fused_ordering(580) 00:14:17.691 fused_ordering(581) 00:14:17.691 fused_ordering(582) 00:14:17.691 fused_ordering(583) 00:14:17.691 fused_ordering(584) 00:14:17.691 fused_ordering(585) 00:14:17.691 fused_ordering(586) 00:14:17.691 fused_ordering(587) 00:14:17.691 fused_ordering(588) 00:14:17.691 fused_ordering(589) 00:14:17.691 fused_ordering(590) 00:14:17.691 fused_ordering(591) 00:14:17.691 fused_ordering(592) 00:14:17.691 fused_ordering(593) 00:14:17.691 fused_ordering(594) 00:14:17.691 fused_ordering(595) 00:14:17.691 fused_ordering(596) 00:14:17.691 fused_ordering(597) 00:14:17.691 fused_ordering(598) 00:14:17.691 fused_ordering(599) 00:14:17.691 fused_ordering(600) 00:14:17.691 fused_ordering(601) 00:14:17.691 fused_ordering(602) 00:14:17.691 fused_ordering(603) 00:14:17.691 fused_ordering(604) 00:14:17.691 fused_ordering(605) 00:14:17.691 fused_ordering(606) 00:14:17.691 fused_ordering(607) 00:14:17.691 fused_ordering(608) 00:14:17.691 fused_ordering(609) 00:14:17.691 fused_ordering(610) 00:14:17.691 fused_ordering(611) 00:14:17.691 fused_ordering(612) 00:14:17.691 fused_ordering(613) 00:14:17.691 fused_ordering(614) 00:14:17.691 fused_ordering(615) 00:14:17.950 fused_ordering(616) 00:14:17.950 fused_ordering(617) 00:14:17.950 fused_ordering(618) 00:14:17.950 fused_ordering(619) 00:14:17.950 fused_ordering(620) 00:14:17.950 fused_ordering(621) 00:14:17.950 fused_ordering(622) 00:14:17.950 fused_ordering(623) 00:14:17.950 fused_ordering(624) 00:14:17.950 fused_ordering(625) 00:14:17.950 fused_ordering(626) 00:14:17.950 fused_ordering(627) 00:14:17.950 fused_ordering(628) 00:14:17.950 fused_ordering(629) 00:14:17.950 fused_ordering(630) 00:14:17.950 fused_ordering(631) 00:14:17.950 fused_ordering(632) 00:14:17.950 fused_ordering(633) 00:14:17.950 fused_ordering(634) 
00:14:17.950 fused_ordering(635) 00:14:17.950 fused_ordering(636) 00:14:17.950 fused_ordering(637) 00:14:17.951 fused_ordering(638) 00:14:17.951 fused_ordering(639) 00:14:17.951 fused_ordering(640) 00:14:17.951 fused_ordering(641) 00:14:17.951 fused_ordering(642) 00:14:17.951 fused_ordering(643) 00:14:17.951 fused_ordering(644) 00:14:17.951 fused_ordering(645) 00:14:17.951 fused_ordering(646) 00:14:17.951 fused_ordering(647) 00:14:17.951 fused_ordering(648) 00:14:17.951 fused_ordering(649) 00:14:17.951 fused_ordering(650) 00:14:17.951 fused_ordering(651) 00:14:17.951 fused_ordering(652) 00:14:17.951 fused_ordering(653) 00:14:17.951 fused_ordering(654) 00:14:17.951 fused_ordering(655) 00:14:17.951 fused_ordering(656) 00:14:17.951 fused_ordering(657) 00:14:17.951 fused_ordering(658) 00:14:17.951 fused_ordering(659) 00:14:17.951 fused_ordering(660) 00:14:17.951 fused_ordering(661) 00:14:17.951 fused_ordering(662) 00:14:17.951 fused_ordering(663) 00:14:17.951 fused_ordering(664) 00:14:17.951 fused_ordering(665) 00:14:17.951 fused_ordering(666) 00:14:17.951 fused_ordering(667) 00:14:17.951 fused_ordering(668) 00:14:17.951 fused_ordering(669) 00:14:17.951 fused_ordering(670) 00:14:17.951 fused_ordering(671) 00:14:17.951 fused_ordering(672) 00:14:17.951 fused_ordering(673) 00:14:17.951 fused_ordering(674) 00:14:17.951 fused_ordering(675) 00:14:17.951 fused_ordering(676) 00:14:17.951 fused_ordering(677) 00:14:17.951 fused_ordering(678) 00:14:17.951 fused_ordering(679) 00:14:17.951 fused_ordering(680) 00:14:17.951 fused_ordering(681) 00:14:17.951 fused_ordering(682) 00:14:17.951 fused_ordering(683) 00:14:17.951 fused_ordering(684) 00:14:17.951 fused_ordering(685) 00:14:17.951 fused_ordering(686) 00:14:17.951 fused_ordering(687) 00:14:17.951 fused_ordering(688) 00:14:17.951 fused_ordering(689) 00:14:17.951 fused_ordering(690) 00:14:17.951 fused_ordering(691) 00:14:17.951 fused_ordering(692) 00:14:17.951 fused_ordering(693) 00:14:17.951 fused_ordering(694) 00:14:17.951 
fused_ordering(695) 00:14:17.951 fused_ordering(696) 00:14:17.951 fused_ordering(697) 00:14:17.951 fused_ordering(698) 00:14:17.951 fused_ordering(699) 00:14:17.951 fused_ordering(700) 00:14:17.951 fused_ordering(701) 00:14:17.951 fused_ordering(702) 00:14:17.951 fused_ordering(703) 00:14:17.951 fused_ordering(704) 00:14:17.951 fused_ordering(705) 00:14:17.951 fused_ordering(706) 00:14:17.951 fused_ordering(707) 00:14:17.951 fused_ordering(708) 00:14:17.951 fused_ordering(709) 00:14:17.951 fused_ordering(710) 00:14:17.951 fused_ordering(711) 00:14:17.951 fused_ordering(712) 00:14:17.951 fused_ordering(713) 00:14:17.951 fused_ordering(714) 00:14:17.951 fused_ordering(715) 00:14:17.951 fused_ordering(716) 00:14:17.951 fused_ordering(717) 00:14:17.951 fused_ordering(718) 00:14:17.951 fused_ordering(719) 00:14:17.951 fused_ordering(720) 00:14:17.951 fused_ordering(721) 00:14:17.951 fused_ordering(722) 00:14:17.951 fused_ordering(723) 00:14:17.951 fused_ordering(724) 00:14:17.951 fused_ordering(725) 00:14:17.951 fused_ordering(726) 00:14:17.951 fused_ordering(727) 00:14:17.951 fused_ordering(728) 00:14:17.951 fused_ordering(729) 00:14:17.951 fused_ordering(730) 00:14:17.951 fused_ordering(731) 00:14:17.951 fused_ordering(732) 00:14:17.951 fused_ordering(733) 00:14:17.951 fused_ordering(734) 00:14:17.951 fused_ordering(735) 00:14:17.951 fused_ordering(736) 00:14:17.951 fused_ordering(737) 00:14:17.951 fused_ordering(738) 00:14:17.951 fused_ordering(739) 00:14:17.951 fused_ordering(740) 00:14:17.951 fused_ordering(741) 00:14:17.951 fused_ordering(742) 00:14:17.951 fused_ordering(743) 00:14:17.951 fused_ordering(744) 00:14:17.951 fused_ordering(745) 00:14:17.951 fused_ordering(746) 00:14:17.951 fused_ordering(747) 00:14:17.951 fused_ordering(748) 00:14:17.951 fused_ordering(749) 00:14:17.951 fused_ordering(750) 00:14:17.951 fused_ordering(751) 00:14:17.951 fused_ordering(752) 00:14:17.951 fused_ordering(753) 00:14:17.951 fused_ordering(754) 00:14:17.951 fused_ordering(755) 
00:14:17.951 fused_ordering(756) 00:14:17.951 fused_ordering(757) 00:14:17.951 fused_ordering(758) 00:14:17.951 fused_ordering(759) 00:14:17.951 fused_ordering(760) 00:14:17.951 fused_ordering(761) 00:14:17.951 fused_ordering(762) 00:14:17.951 fused_ordering(763) 00:14:17.951 fused_ordering(764) 00:14:17.951 fused_ordering(765) 00:14:17.951 fused_ordering(766) 00:14:17.951 fused_ordering(767) 00:14:17.951 fused_ordering(768) 00:14:17.951 fused_ordering(769) 00:14:17.951 fused_ordering(770) 00:14:17.951 fused_ordering(771) 00:14:17.951 fused_ordering(772) 00:14:17.951 fused_ordering(773) 00:14:17.951 fused_ordering(774) 00:14:17.951 fused_ordering(775) 00:14:17.951 fused_ordering(776) 00:14:17.951 fused_ordering(777) 00:14:17.951 fused_ordering(778) 00:14:17.951 fused_ordering(779) 00:14:17.951 fused_ordering(780) 00:14:17.951 fused_ordering(781) 00:14:17.951 fused_ordering(782) 00:14:17.951 fused_ordering(783) 00:14:17.951 fused_ordering(784) 00:14:17.951 fused_ordering(785) 00:14:17.951 fused_ordering(786) 00:14:17.951 fused_ordering(787) 00:14:17.951 fused_ordering(788) 00:14:17.951 fused_ordering(789) 00:14:17.951 fused_ordering(790) 00:14:17.951 fused_ordering(791) 00:14:17.951 fused_ordering(792) 00:14:17.951 fused_ordering(793) 00:14:17.951 fused_ordering(794) 00:14:17.951 fused_ordering(795) 00:14:17.951 fused_ordering(796) 00:14:17.951 fused_ordering(797) 00:14:17.951 fused_ordering(798) 00:14:17.951 fused_ordering(799) 00:14:17.951 fused_ordering(800) 00:14:17.951 fused_ordering(801) 00:14:17.951 fused_ordering(802) 00:14:17.951 fused_ordering(803) 00:14:17.951 fused_ordering(804) 00:14:17.951 fused_ordering(805) 00:14:17.951 fused_ordering(806) 00:14:17.951 fused_ordering(807) 00:14:17.951 fused_ordering(808) 00:14:17.951 fused_ordering(809) 00:14:17.951 fused_ordering(810) 00:14:17.951 fused_ordering(811) 00:14:17.951 fused_ordering(812) 00:14:17.951 fused_ordering(813) 00:14:17.951 fused_ordering(814) 00:14:17.951 fused_ordering(815) 00:14:17.951 
fused_ordering(816) 00:14:17.951 fused_ordering(817) 00:14:17.951 fused_ordering(818) 00:14:17.951 fused_ordering(819) 00:14:17.951 fused_ordering(820) 00:14:18.519 fused_ordering(821) 00:14:18.519 fused_ordering(822) 00:14:18.519 fused_ordering(823) 00:14:18.519 fused_ordering(824) 00:14:18.519 fused_ordering(825) 00:14:18.519 fused_ordering(826) 00:14:18.519 fused_ordering(827) 00:14:18.519 fused_ordering(828) 00:14:18.519 fused_ordering(829) 00:14:18.519 fused_ordering(830) 00:14:18.519 fused_ordering(831) 00:14:18.519 fused_ordering(832) 00:14:18.519 fused_ordering(833) 00:14:18.519 fused_ordering(834) 00:14:18.519 fused_ordering(835) 00:14:18.519 fused_ordering(836) 00:14:18.519 fused_ordering(837) 00:14:18.519 fused_ordering(838) 00:14:18.519 fused_ordering(839) 00:14:18.519 fused_ordering(840) 00:14:18.519 fused_ordering(841) 00:14:18.519 fused_ordering(842) 00:14:18.519 fused_ordering(843) 00:14:18.519 fused_ordering(844) 00:14:18.519 fused_ordering(845) 00:14:18.519 fused_ordering(846) 00:14:18.519 fused_ordering(847) 00:14:18.519 fused_ordering(848) 00:14:18.519 fused_ordering(849) 00:14:18.519 fused_ordering(850) 00:14:18.519 fused_ordering(851) 00:14:18.519 fused_ordering(852) 00:14:18.519 fused_ordering(853) 00:14:18.519 fused_ordering(854) 00:14:18.519 fused_ordering(855) 00:14:18.519 fused_ordering(856) 00:14:18.519 fused_ordering(857) 00:14:18.519 fused_ordering(858) 00:14:18.519 fused_ordering(859) 00:14:18.519 fused_ordering(860) 00:14:18.519 fused_ordering(861) 00:14:18.519 fused_ordering(862) 00:14:18.519 fused_ordering(863) 00:14:18.519 fused_ordering(864) 00:14:18.519 fused_ordering(865) 00:14:18.519 fused_ordering(866) 00:14:18.519 fused_ordering(867) 00:14:18.519 fused_ordering(868) 00:14:18.519 fused_ordering(869) 00:14:18.519 fused_ordering(870) 00:14:18.519 fused_ordering(871) 00:14:18.519 fused_ordering(872) 00:14:18.519 fused_ordering(873) 00:14:18.519 fused_ordering(874) 00:14:18.519 fused_ordering(875) 00:14:18.519 fused_ordering(876) 
00:14:18.519 fused_ordering(877) 00:14:18.519 fused_ordering(878) 00:14:18.519 fused_ordering(879) 00:14:18.519 fused_ordering(880) 00:14:18.519 fused_ordering(881) 00:14:18.519 fused_ordering(882) 00:14:18.519 fused_ordering(883) 00:14:18.519 fused_ordering(884) 00:14:18.519 fused_ordering(885) 00:14:18.519 fused_ordering(886) 00:14:18.519 fused_ordering(887) 00:14:18.519 fused_ordering(888) 00:14:18.519 fused_ordering(889) 00:14:18.519 fused_ordering(890) 00:14:18.519 fused_ordering(891) 00:14:18.519 fused_ordering(892) 00:14:18.519 fused_ordering(893) 00:14:18.519 fused_ordering(894) 00:14:18.519 fused_ordering(895) 00:14:18.519 fused_ordering(896) 00:14:18.519 fused_ordering(897) 00:14:18.519 fused_ordering(898) 00:14:18.519 fused_ordering(899) 00:14:18.519 fused_ordering(900) 00:14:18.519 fused_ordering(901) 00:14:18.519 fused_ordering(902) 00:14:18.519 fused_ordering(903) 00:14:18.519 fused_ordering(904) 00:14:18.519 fused_ordering(905) 00:14:18.519 fused_ordering(906) 00:14:18.519 fused_ordering(907) 00:14:18.519 fused_ordering(908) 00:14:18.519 fused_ordering(909) 00:14:18.519 fused_ordering(910) 00:14:18.519 fused_ordering(911) 00:14:18.519 fused_ordering(912) 00:14:18.519 fused_ordering(913) 00:14:18.519 fused_ordering(914) 00:14:18.519 fused_ordering(915) 00:14:18.519 fused_ordering(916) 00:14:18.519 fused_ordering(917) 00:14:18.519 fused_ordering(918) 00:14:18.519 fused_ordering(919) 00:14:18.519 fused_ordering(920) 00:14:18.519 fused_ordering(921) 00:14:18.519 fused_ordering(922) 00:14:18.519 fused_ordering(923) 00:14:18.519 fused_ordering(924) 00:14:18.519 fused_ordering(925) 00:14:18.519 fused_ordering(926) 00:14:18.520 fused_ordering(927) 00:14:18.520 fused_ordering(928) 00:14:18.520 fused_ordering(929) 00:14:18.520 fused_ordering(930) 00:14:18.520 fused_ordering(931) 00:14:18.520 fused_ordering(932) 00:14:18.520 fused_ordering(933) 00:14:18.520 fused_ordering(934) 00:14:18.520 fused_ordering(935) 00:14:18.520 fused_ordering(936) 00:14:18.520 
fused_ordering(937) 00:14:18.520 fused_ordering(938) 00:14:18.520 fused_ordering(939) 00:14:18.520 fused_ordering(940) 00:14:18.520 fused_ordering(941) 00:14:18.520 fused_ordering(942) 00:14:18.520 fused_ordering(943) 00:14:18.520 fused_ordering(944) 00:14:18.520 fused_ordering(945) 00:14:18.520 fused_ordering(946) 00:14:18.520 fused_ordering(947) 00:14:18.520 fused_ordering(948) 00:14:18.520 fused_ordering(949) 00:14:18.520 fused_ordering(950) 00:14:18.520 fused_ordering(951) 00:14:18.520 fused_ordering(952) 00:14:18.520 fused_ordering(953) 00:14:18.520 fused_ordering(954) 00:14:18.520 fused_ordering(955) 00:14:18.520 fused_ordering(956) 00:14:18.520 fused_ordering(957) 00:14:18.520 fused_ordering(958) 00:14:18.520 fused_ordering(959) 00:14:18.520 fused_ordering(960) 00:14:18.520 fused_ordering(961) 00:14:18.520 fused_ordering(962) 00:14:18.520 fused_ordering(963) 00:14:18.520 fused_ordering(964) 00:14:18.520 fused_ordering(965) 00:14:18.520 fused_ordering(966) 00:14:18.520 fused_ordering(967) 00:14:18.520 fused_ordering(968) 00:14:18.520 fused_ordering(969) 00:14:18.520 fused_ordering(970) 00:14:18.520 fused_ordering(971) 00:14:18.520 fused_ordering(972) 00:14:18.520 fused_ordering(973) 00:14:18.520 fused_ordering(974) 00:14:18.520 fused_ordering(975) 00:14:18.520 fused_ordering(976) 00:14:18.520 fused_ordering(977) 00:14:18.520 fused_ordering(978) 00:14:18.520 fused_ordering(979) 00:14:18.520 fused_ordering(980) 00:14:18.520 fused_ordering(981) 00:14:18.520 fused_ordering(982) 00:14:18.520 fused_ordering(983) 00:14:18.520 fused_ordering(984) 00:14:18.520 fused_ordering(985) 00:14:18.520 fused_ordering(986) 00:14:18.520 fused_ordering(987) 00:14:18.520 fused_ordering(988) 00:14:18.520 fused_ordering(989) 00:14:18.520 fused_ordering(990) 00:14:18.520 fused_ordering(991) 00:14:18.520 fused_ordering(992) 00:14:18.520 fused_ordering(993) 00:14:18.520 fused_ordering(994) 00:14:18.520 fused_ordering(995) 00:14:18.520 fused_ordering(996) 00:14:18.520 fused_ordering(997) 
00:14:18.520 fused_ordering(998) 00:14:18.520 fused_ordering(999) 00:14:18.520 fused_ordering(1000) 00:14:18.520 fused_ordering(1001) 00:14:18.520 fused_ordering(1002) 00:14:18.520 fused_ordering(1003) 00:14:18.520 fused_ordering(1004) 00:14:18.520 fused_ordering(1005) 00:14:18.520 fused_ordering(1006) 00:14:18.520 fused_ordering(1007) 00:14:18.520 fused_ordering(1008) 00:14:18.520 fused_ordering(1009) 00:14:18.520 fused_ordering(1010) 00:14:18.520 fused_ordering(1011) 00:14:18.520 fused_ordering(1012) 00:14:18.520 fused_ordering(1013) 00:14:18.520 fused_ordering(1014) 00:14:18.520 fused_ordering(1015) 00:14:18.520 fused_ordering(1016) 00:14:18.520 fused_ordering(1017) 00:14:18.520 fused_ordering(1018) 00:14:18.520 fused_ordering(1019) 00:14:18.520 fused_ordering(1020) 00:14:18.520 fused_ordering(1021) 00:14:18.520 fused_ordering(1022) 00:14:18.520 fused_ordering(1023) 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:18.520 rmmod nvme_tcp 00:14:18.520 rmmod nvme_fabrics 00:14:18.520 rmmod nvme_keyring 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 2386970 ']' 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 2386970 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2386970 ']' 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2386970 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2386970 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2386970' 00:14:18.520 killing process with pid 2386970 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2386970 00:14:18.520 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2386970 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == 
\t\c\p ]] 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.779 15:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.316 15:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.316 00:14:21.316 real 0m11.276s 00:14:21.316 user 0m5.667s 00:14:21.316 sys 0m5.835s 00:14:21.316 15:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.316 15:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.316 ************************************ 00:14:21.316 END TEST nvmf_fused_ordering 00:14:21.316 ************************************ 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:21.316 15:48:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.316 ************************************ 00:14:21.316 START TEST nvmf_ns_masking 00:14:21.316 ************************************ 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:21.316 * Looking for test storage... 00:14:21.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.316 15:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.316 --rc genhtml_branch_coverage=1 00:14:21.316 --rc genhtml_function_coverage=1 00:14:21.316 --rc genhtml_legend=1 00:14:21.316 --rc geninfo_all_blocks=1 00:14:21.316 --rc geninfo_unexecuted_blocks=1 00:14:21.316 00:14:21.316 ' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.316 --rc genhtml_branch_coverage=1 00:14:21.316 --rc genhtml_function_coverage=1 00:14:21.316 --rc genhtml_legend=1 00:14:21.316 --rc geninfo_all_blocks=1 00:14:21.316 --rc geninfo_unexecuted_blocks=1 00:14:21.316 00:14:21.316 ' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.316 --rc genhtml_branch_coverage=1 00:14:21.316 --rc genhtml_function_coverage=1 00:14:21.316 --rc genhtml_legend=1 00:14:21.316 --rc geninfo_all_blocks=1 00:14:21.316 --rc geninfo_unexecuted_blocks=1 00:14:21.316 00:14:21.316 ' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.316 --rc genhtml_branch_coverage=1 00:14:21.316 --rc 
genhtml_function_coverage=1 00:14:21.316 --rc genhtml_legend=1 00:14:21.316 --rc geninfo_all_blocks=1 00:14:21.316 --rc geninfo_unexecuted_blocks=1 00:14:21.316 00:14:21.316 ' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.316 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=25a4ee58-1fa5-4007-a6e5-b694bfe69aac 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=13cb136d-8f33-4de7-b1db-1b659b010180 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7e71893b-f32f-445e-a900-f3b4b5f576e0 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g 
is_hw=no 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.317 15:48:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.886 15:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.886 15:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.886 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:27.887 15:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.887 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.887 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.887 15:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:14:27.887 00:14:27.887 --- 10.0.0.2 ping statistics --- 00:14:27.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.887 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:27.887 00:14:27.887 --- 10.0.0.1 ping statistics --- 00:14:27.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.887 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.887 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=2390985 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 2390985 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2390985 ']' 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.888 15:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.888 [2024-10-01 15:48:37.296157] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:14:27.888 [2024-10-01 15:48:37.296200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.888 [2024-10-01 15:48:37.367061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.888 [2024-10-01 15:48:37.443582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.888 [2024-10-01 15:48:37.443618] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.888 [2024-10-01 15:48:37.443625] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.888 [2024-10-01 15:48:37.443631] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.888 [2024-10-01 15:48:37.443636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.888 [2024-10-01 15:48:37.443659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.147 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.406 [2024-10-01 15:48:38.344380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.406 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:28.406 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:28.406 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:28.406 Malloc1 00:14:28.406 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:28.664 Malloc2 00:14:28.664 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:28.926 15:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:29.185 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.185 [2024-10-01 15:48:39.323640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.185 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:29.185 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e71893b-f32f-445e-a900-f3b4b5f576e0 -a 10.0.0.2 -s 4420 -i 4 00:14:29.443 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:29.443 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:29.443 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.443 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:29.443 15:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:31.345 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.604 [ 0]:0x1 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.604 
15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dd0337c2c8c4b85a65b58f692c6e8cc 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dd0337c2c8c4b85a65b58f692c6e8cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.604 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.862 [ 0]:0x1 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dd0337c2c8c4b85a65b58f692c6e8cc 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dd0337c2c8c4b85a65b58f692c6e8cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.862 [ 1]:0x2 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:31.862 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.862 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.120 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:32.378 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:32.378 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e71893b-f32f-445e-a900-f3b4b5f576e0 -a 10.0.0.2 -s 4420 -i 4 00:14:32.636 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:32.636 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:32.636 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.636 15:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:32.636 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:32.636 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.538 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:34.539 [ 0]:0x2 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.539 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:34.797 [ 0]:0x1 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.797 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dd0337c2c8c4b85a65b58f692c6e8cc 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dd0337c2c8c4b85a65b58f692c6e8cc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.056 [ 1]:0x2 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.056 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.315 [ 0]:0x2 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.315 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:35.573 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:35.573 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e71893b-f32f-445e-a900-f3b4b5f576e0 -a 10.0.0.2 -s 4420 -i 4 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:35.832 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:37.734 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:37.735 [ 0]:0x1 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.735 15:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dd0337c2c8c4b85a65b58f692c6e8cc 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dd0337c2c8c4b85a65b58f692c6e8cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:37.735 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:37.994 [ 1]:0x2 00:14:37.994 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.994 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:37.994 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:37.994 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.994 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.253 
15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.253 [ 0]:0x2 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.253 15:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:38.253 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:38.512 [2024-10-01 15:48:48.497676] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:38.512 request: 00:14:38.512 { 00:14:38.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.512 "nsid": 2, 00:14:38.512 "host": "nqn.2016-06.io.spdk:host1", 00:14:38.512 "method": "nvmf_ns_remove_host", 00:14:38.512 "req_id": 1 00:14:38.512 } 00:14:38.512 Got JSON-RPC error response 00:14:38.512 response: 00:14:38.512 { 00:14:38.512 "code": -32602, 00:14:38.512 "message": "Invalid parameters" 00:14:38.512 } 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.512 15:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.512 [ 0]:0x2 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d77b51217d964a3a8a1943da247666f0 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d77b51217d964a3a8a1943da247666f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2392997 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2392997 /var/tmp/host.sock 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2392997 ']' 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:38.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.512 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.771 [2024-10-01 15:48:48.715368] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:38.771 [2024-10-01 15:48:48.715415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392997 ] 00:14:38.771 [2024-10-01 15:48:48.784402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.771 [2024-10-01 15:48:48.857296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.704 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.704 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:39.704 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.704 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.961 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 25a4ee58-1fa5-4007-a6e5-b694bfe69aac 00:14:39.961 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:39.961 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 25A4EE581FA54007A6E5B694BFE69AAC -i 00:14:40.219 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 13cb136d-8f33-4de7-b1db-1b659b010180 00:14:40.219 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:40.219 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 13CB136D8F334DE7B1DB1B659B010180 -i 00:14:40.219 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.477 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:40.735 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:40.735 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:40.994 nvme0n1 00:14:40.994 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:40.994 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:41.252 nvme1n2 00:14:41.252 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:41.252 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # xargs 00:14:41.252 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:41.252 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:41.252 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:41.510 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:41.510 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:41.510 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:41.510 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:41.769 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 25a4ee58-1fa5-4007-a6e5-b694bfe69aac == \2\5\a\4\e\e\5\8\-\1\f\a\5\-\4\0\0\7\-\a\6\e\5\-\b\6\9\4\b\f\e\6\9\a\a\c ]] 00:14:41.769 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:41.769 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:41.769 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 13cb136d-8f33-4de7-b1db-1b659b010180 == \1\3\c\b\1\3\6\d\-\8\f\3\3\-\4\d\e\7\-\b\1\d\b\-\1\b\6\5\9\b\0\1\0\1\8\0 ]] 00:14:42.027 15:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2392997 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2392997 ']' 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2392997 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392997 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392997' 00:14:42.027 killing process with pid 2392997 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2392997 00:14:42.027 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2392997 00:14:42.286 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.544 rmmod nvme_tcp 00:14:42.544 rmmod nvme_fabrics 00:14:42.544 rmmod nvme_keyring 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 2390985 ']' 00:14:42.544 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 2390985 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2390985 ']' 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2390985 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2390985 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.545 15:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2390985' 00:14:42.545 killing process with pid 2390985 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2390985 00:14:42.545 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2390985 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.803 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.336 00:14:45.336 real 0m23.984s 00:14:45.336 user 0m26.125s 00:14:45.336 sys 
0m6.856s 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:45.336 ************************************ 00:14:45.336 END TEST nvmf_ns_masking 00:14:45.336 ************************************ 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.336 ************************************ 00:14:45.336 START TEST nvmf_nvme_cli 00:14:45.336 ************************************ 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:45.336 * Looking for test storage... 
00:14:45.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:45.336 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:45.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.336 --rc 
genhtml_branch_coverage=1 00:14:45.336 --rc genhtml_function_coverage=1 00:14:45.336 --rc genhtml_legend=1 00:14:45.336 --rc geninfo_all_blocks=1 00:14:45.336 --rc geninfo_unexecuted_blocks=1 00:14:45.336 00:14:45.336 ' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:45.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.336 --rc genhtml_branch_coverage=1 00:14:45.336 --rc genhtml_function_coverage=1 00:14:45.336 --rc genhtml_legend=1 00:14:45.336 --rc geninfo_all_blocks=1 00:14:45.336 --rc geninfo_unexecuted_blocks=1 00:14:45.336 00:14:45.336 ' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:45.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.336 --rc genhtml_branch_coverage=1 00:14:45.336 --rc genhtml_function_coverage=1 00:14:45.336 --rc genhtml_legend=1 00:14:45.336 --rc geninfo_all_blocks=1 00:14:45.336 --rc geninfo_unexecuted_blocks=1 00:14:45.336 00:14:45.336 ' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:45.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.336 --rc genhtml_branch_coverage=1 00:14:45.336 --rc genhtml_function_coverage=1 00:14:45.336 --rc genhtml_legend=1 00:14:45.336 --rc geninfo_all_blocks=1 00:14:45.336 --rc geninfo_unexecuted_blocks=1 00:14:45.336 00:14:45.336 ' 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.336 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.336 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.337 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.337 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:45.337 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:51.901 15:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.901 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:51.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:51.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:51.901 15:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:51.901 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:51.902 Found net devices under 0000:86:00.0: cvl_0_0 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.902 15:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:51.902 Found net devices under 0000:86:00.1: cvl_0_1 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:14:51.902 00:14:51.902 --- 10.0.0.2 ping statistics --- 00:14:51.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.902 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:14:51.902 00:14:51.902 --- 10.0.0.1 ping statistics --- 00:14:51.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.902 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:51.902 15:49:01 
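The `nvmf_tcp_init` sequence traced above builds the test topology: the target-side NIC port (`cvl_0_0`) is moved into a private network namespace with 10.0.0.2/24, the initiator port (`cvl_0_1`) keeps 10.0.0.1/24 in the root namespace, an iptables rule admits port 4420, and both directions are verified with ping. A dry-run sketch of the same steps (interface names, namespace name, and addresses are taken from the log; the `run` wrapper is ours, and applying the commands for real requires root):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init in the log.
# Echo-only dry run; redefine run() { "$@"; } to actually apply (root).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                       # target port into ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                   # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator
```

With the namespace in place, the target application itself is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` later in the trace.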
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=2397238 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 2397238 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2397238 ']' 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.902 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.902 [2024-10-01 15:49:01.359894] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:14:51.902 [2024-10-01 15:49:01.359940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.902 [2024-10-01 15:49:01.430318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.902 [2024-10-01 15:49:01.509927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.902 [2024-10-01 15:49:01.509965] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.902 [2024-10-01 15:49:01.509972] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.902 [2024-10-01 15:49:01.509978] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.902 [2024-10-01 15:49:01.509983] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.902 [2024-10-01 15:49:01.510061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.902 [2024-10-01 15:49:01.510110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.902 [2024-10-01 15:49:01.510568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.902 [2024-10-01 15:49:01.510569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.161 [2024-10-01 15:49:02.244532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:52.161 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.161 Malloc0 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 Malloc1 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 [2024-10-01 15:49:02.330123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.162 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:52.420 00:14:52.420 Discovery Log Number of Records 2, Generation counter 2 00:14:52.420 =====Discovery Log Entry 0====== 00:14:52.420 trtype: tcp 00:14:52.420 adrfam: ipv4 00:14:52.420 subtype: current discovery subsystem 00:14:52.420 treq: not required 00:14:52.420 portid: 0 00:14:52.420 trsvcid: 4420 
00:14:52.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:52.420 traddr: 10.0.0.2 00:14:52.420 eflags: explicit discovery connections, duplicate discovery information 00:14:52.420 sectype: none 00:14:52.420 =====Discovery Log Entry 1====== 00:14:52.420 trtype: tcp 00:14:52.420 adrfam: ipv4 00:14:52.420 subtype: nvme subsystem 00:14:52.420 treq: not required 00:14:52.420 portid: 0 00:14:52.420 trsvcid: 4420 00:14:52.420 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:52.420 traddr: 10.0.0.2 00:14:52.420 eflags: none 00:14:52.420 sectype: none 00:14:52.420 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:52.420 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:52.421 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.798 15:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:53.798 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.798 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.798 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:53.798 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:53.798 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:55.737 
15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:55.737 /dev/nvme0n2 ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:55.737 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:55.738 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.738 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:55.738 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.738 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.738 rmmod nvme_tcp 00:14:55.997 rmmod nvme_fabrics 00:14:55.997 rmmod nvme_keyring 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 2397238 ']' 
00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 2397238 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2397238 ']' 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2397238 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.997 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2397238 00:14:55.997 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.997 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.997 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2397238' 00:14:55.997 killing process with pid 2397238 00:14:55.997 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2397238 00:14:55.997 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2397238 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.257 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:58.194 00:14:58.194 real 0m13.226s 00:14:58.194 user 0m20.685s 00:14:58.194 sys 0m5.161s 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.194 ************************************ 00:14:58.194 END TEST nvmf_nvme_cli 00:14:58.194 ************************************ 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.194 15:49:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.454 ************************************ 00:14:58.454 
START TEST nvmf_vfio_user 00:14:58.454 ************************************ 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:58.454 * Looking for test storage... 00:14:58.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.454 15:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:58.454 15:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.454 --rc genhtml_branch_coverage=1 00:14:58.454 --rc genhtml_function_coverage=1 00:14:58.454 --rc genhtml_legend=1 00:14:58.454 --rc geninfo_all_blocks=1 00:14:58.454 --rc geninfo_unexecuted_blocks=1 00:14:58.454 00:14:58.454 ' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.454 --rc genhtml_branch_coverage=1 00:14:58.454 --rc genhtml_function_coverage=1 00:14:58.454 --rc genhtml_legend=1 00:14:58.454 --rc geninfo_all_blocks=1 00:14:58.454 --rc geninfo_unexecuted_blocks=1 00:14:58.454 00:14:58.454 ' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.454 --rc genhtml_branch_coverage=1 00:14:58.454 --rc genhtml_function_coverage=1 00:14:58.454 --rc genhtml_legend=1 00:14:58.454 --rc geninfo_all_blocks=1 00:14:58.454 --rc geninfo_unexecuted_blocks=1 00:14:58.454 00:14:58.454 ' 00:14:58.454 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:58.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.454 --rc genhtml_branch_coverage=1 00:14:58.454 --rc genhtml_function_coverage=1 00:14:58.454 --rc genhtml_legend=1 00:14:58.454 --rc geninfo_all_blocks=1 00:14:58.454 --rc geninfo_unexecuted_blocks=1 00:14:58.454 00:14:58.454 ' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.455 
15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:58.455 15:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2398537 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2398537' 00:14:58.455 Process pid: 2398537 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2398537 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 2398537 ']' 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.455 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:58.714 [2024-10-01 15:49:08.662030] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:14:58.714 [2024-10-01 15:49:08.662078] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.714 [2024-10-01 15:49:08.728325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.714 [2024-10-01 15:49:08.807584] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.714 [2024-10-01 15:49:08.807620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.714 [2024-10-01 15:49:08.807627] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.714 [2024-10-01 15:49:08.807633] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.714 [2024-10-01 15:49:08.807638] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.714 [2024-10-01 15:49:08.807707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.714 [2024-10-01 15:49:08.807739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.714 [2024-10-01 15:49:08.807849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.714 [2024-10-01 15:49:08.807850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.651 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.651 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:59.651 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:00.587 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:00.846 Malloc1 00:15:00.846 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:01.104 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:01.363 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:01.363 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.363 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:01.363 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.622 Malloc2 00:15:01.622 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:01.881 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:02.139 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:02.401 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:02.401 [2024-10-01 15:49:12.401270] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:02.401 [2024-10-01 15:49:12.401306] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399244 ] 00:15:02.401 [2024-10-01 15:49:12.427563] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:02.401 [2024-10-01 15:49:12.432051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.401 [2024-10-01 15:49:12.432073] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faa488f7000 00:15:02.401 [2024-10-01 15:49:12.433055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.434047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.435060] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.436059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.437064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.438070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.439074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.440080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.401 [2024-10-01 15:49:12.441087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.401 [2024-10-01 15:49:12.441096] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faa488ec000 00:15:02.401 [2024-10-01 15:49:12.442156] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.401 [2024-10-01 15:49:12.456477] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:02.401 [2024-10-01 15:49:12.456501] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:02.401 [2024-10-01 15:49:12.461194] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:02.401 
[2024-10-01 15:49:12.461234] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:02.401 [2024-10-01 15:49:12.461310] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:02.401 [2024-10-01 15:49:12.461326] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:02.401 [2024-10-01 15:49:12.461331] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:02.401 [2024-10-01 15:49:12.462193] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:02.401 [2024-10-01 15:49:12.462202] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:02.401 [2024-10-01 15:49:12.462208] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:02.401 [2024-10-01 15:49:12.463202] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:02.401 [2024-10-01 15:49:12.463209] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:02.401 [2024-10-01 15:49:12.463215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.401 [2024-10-01 15:49:12.464205] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:02.401 [2024-10-01 15:49:12.464212] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.401 [2024-10-01 15:49:12.465213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:02.401 [2024-10-01 15:49:12.465222] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:02.401 [2024-10-01 15:49:12.465226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:02.401 [2024-10-01 15:49:12.465231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.401 [2024-10-01 15:49:12.465336] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:02.402 [2024-10-01 15:49:12.465341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.402 [2024-10-01 15:49:12.465345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:02.402 [2024-10-01 15:49:12.466221] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:02.402 [2024-10-01 15:49:12.467226] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:02.402 [2024-10-01 15:49:12.468231] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:02.402 [2024-10-01 15:49:12.469233] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.402 [2024-10-01 15:49:12.469294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.402 [2024-10-01 15:49:12.470244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:02.402 [2024-10-01 15:49:12.470251] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.402 [2024-10-01 15:49:12.470258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:02.402 [2024-10-01 15:49:12.470280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470293] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.402 [2024-10-01 15:49:12.470297] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.402 [2024-10-01 15:49:12.470301] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.402 [2024-10-01 15:49:12.470313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 
15:49:12.470371] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:02.402 [2024-10-01 15:49:12.470375] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:02.402 [2024-10-01 15:49:12.470378] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:02.402 [2024-10-01 15:49:12.470382] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:02.402 [2024-10-01 15:49:12.470387] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:02.402 [2024-10-01 15:49:12.470391] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:02.402 [2024-10-01 15:49:12.470394] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470401] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.402 [2024-10-01 15:49:12.470438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:15:02.402 [2024-10-01 15:49:12.470445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.402 [2024-10-01 15:49:12.470453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.402 [2024-10-01 15:49:12.470457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470489] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:02.402 [2024-10-01 15:49:12.470494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470585] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:02.402 [2024-10-01 15:49:12.470588] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:02.402 [2024-10-01 15:49:12.470591] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.402 [2024-10-01 15:49:12.470597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470618] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:02.402 [2024-10-01 15:49:12.470628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470641] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.402 [2024-10-01 15:49:12.470644] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.402 [2024-10-01 15:49:12.470648] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.402 [2024-10-01 15:49:12.470653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470696] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470702] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.402 [2024-10-01 15:49:12.470706] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.402 [2024-10-01 15:49:12.470710] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.402 [2024-10-01 15:49:12.470716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470767] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.402 [2024-10-01 15:49:12.470771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:02.402 [2024-10-01 15:49:12.470775] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:02.402 [2024-10-01 15:49:12.470793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470812] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:02.402 [2024-10-01 15:49:12.470852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.402 [2024-10-01 15:49:12.470866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:02.403 [2024-10-01 15:49:12.470878] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:02.403 [2024-10-01 15:49:12.470883] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:02.403 [2024-10-01 15:49:12.470886] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:02.403 [2024-10-01 15:49:12.470889] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:02.403 [2024-10-01 15:49:12.470891] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:02.403 [2024-10-01 15:49:12.470897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:02.403 [2024-10-01 15:49:12.470905] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:02.403 [2024-10-01 
15:49:12.470909] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:02.403 [2024-10-01 15:49:12.470912] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.403 [2024-10-01 15:49:12.470918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:02.403 [2024-10-01 15:49:12.470924] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:02.403 [2024-10-01 15:49:12.470927] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.403 [2024-10-01 15:49:12.470930] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.403 [2024-10-01 15:49:12.470936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.403 [2024-10-01 15:49:12.470942] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:02.403 [2024-10-01 15:49:12.470946] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:02.403 [2024-10-01 15:49:12.470949] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:02.403 [2024-10-01 15:49:12.470954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:02.403 [2024-10-01 15:49:12.470960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:02.403 [2024-10-01 15:49:12.470970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
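The `nvme_pcie_prp_list_append` records above show how each admin command buffer is described with PRP entries: a page-aligned 4096-byte (or 512-byte) buffer needs only PRP1 and counts as one entry, while the 8192-byte GET LOG PAGE buffer spans two 4 KiB pages, so its second page address goes directly into PRP2 and the trace reports two entries. A sketch of that page-counting rule — an illustration of the logged behavior, not SPDK's actual code — using addresses copied from the trace:

```python
PAGE_SIZE = 4096  # NVMe memory page size used throughout these traces

def prp_entries(virt_addr: int, length: int) -> dict:
    """Count the 4 KiB pages a buffer spans and derive PRP1/PRP2 as in the traces."""
    offset = virt_addr % PAGE_SIZE
    pages = (offset + length + PAGE_SIZE - 1) // PAGE_SIZE
    out = {"entries": pages, "prp1": virt_addr}
    if pages == 2:
        # exactly two pages: the second page pointer is placed directly in PRP2
        out["prp2"] = (virt_addr - offset) + PAGE_SIZE
    elif pages > 2:
        # more than two pages would require a PRP list page instead (not shown here)
        out["prp2"] = None
    return out

# Matches the trace: len:4096 at 0x2000002fb000 -> 1 entry, PRP1 only;
# len:8192 at 0x2000002f6000 -> 2 entries, PRP2 = 0x2000002f7000
```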
00:15:02.403 [2024-10-01 15:49:12.470979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:02.403 [2024-10-01 15:49:12.470985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:02.403 ===================================================== 00:15:02.403 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.403 ===================================================== 00:15:02.403 Controller Capabilities/Features 00:15:02.403 ================================ 00:15:02.403 Vendor ID: 4e58 00:15:02.403 Subsystem Vendor ID: 4e58 00:15:02.403 Serial Number: SPDK1 00:15:02.403 Model Number: SPDK bdev Controller 00:15:02.403 Firmware Version: 25.01 00:15:02.403 Recommended Arb Burst: 6 00:15:02.403 IEEE OUI Identifier: 8d 6b 50 00:15:02.403 Multi-path I/O 00:15:02.403 May have multiple subsystem ports: Yes 00:15:02.403 May have multiple controllers: Yes 00:15:02.403 Associated with SR-IOV VF: No 00:15:02.403 Max Data Transfer Size: 131072 00:15:02.403 Max Number of Namespaces: 32 00:15:02.403 Max Number of I/O Queues: 127 00:15:02.403 NVMe Specification Version (VS): 1.3 00:15:02.403 NVMe Specification Version (Identify): 1.3 00:15:02.403 Maximum Queue Entries: 256 00:15:02.403 Contiguous Queues Required: Yes 00:15:02.403 Arbitration Mechanisms Supported 00:15:02.403 Weighted Round Robin: Not Supported 00:15:02.403 Vendor Specific: Not Supported 00:15:02.403 Reset Timeout: 15000 ms 00:15:02.403 Doorbell Stride: 4 bytes 00:15:02.403 NVM Subsystem Reset: Not Supported 00:15:02.403 Command Sets Supported 00:15:02.403 NVM Command Set: Supported 00:15:02.403 Boot Partition: Not Supported 00:15:02.403 Memory Page Size Minimum: 4096 bytes 00:15:02.403 Memory Page Size Maximum: 4096 bytes 00:15:02.403 Persistent Memory Region: Not Supported 00:15:02.403 Optional Asynchronous Events 
Supported 00:15:02.403 Namespace Attribute Notices: Supported 00:15:02.403 Firmware Activation Notices: Not Supported 00:15:02.403 ANA Change Notices: Not Supported 00:15:02.403 PLE Aggregate Log Change Notices: Not Supported 00:15:02.403 LBA Status Info Alert Notices: Not Supported 00:15:02.403 EGE Aggregate Log Change Notices: Not Supported 00:15:02.403 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.403 Zone Descriptor Change Notices: Not Supported 00:15:02.403 Discovery Log Change Notices: Not Supported 00:15:02.403 Controller Attributes 00:15:02.403 128-bit Host Identifier: Supported 00:15:02.403 Non-Operational Permissive Mode: Not Supported 00:15:02.403 NVM Sets: Not Supported 00:15:02.403 Read Recovery Levels: Not Supported 00:15:02.403 Endurance Groups: Not Supported 00:15:02.403 Predictable Latency Mode: Not Supported 00:15:02.403 Traffic Based Keep ALive: Not Supported 00:15:02.403 Namespace Granularity: Not Supported 00:15:02.403 SQ Associations: Not Supported 00:15:02.403 UUID List: Not Supported 00:15:02.403 Multi-Domain Subsystem: Not Supported 00:15:02.403 Fixed Capacity Management: Not Supported 00:15:02.403 Variable Capacity Management: Not Supported 00:15:02.403 Delete Endurance Group: Not Supported 00:15:02.403 Delete NVM Set: Not Supported 00:15:02.403 Extended LBA Formats Supported: Not Supported 00:15:02.403 Flexible Data Placement Supported: Not Supported 00:15:02.403 00:15:02.403 Controller Memory Buffer Support 00:15:02.403 ================================ 00:15:02.403 Supported: No 00:15:02.403 00:15:02.403 Persistent Memory Region Support 00:15:02.403 ================================ 00:15:02.403 Supported: No 00:15:02.403 00:15:02.403 Admin Command Set Attributes 00:15:02.403 ============================ 00:15:02.403 Security Send/Receive: Not Supported 00:15:02.403 Format NVM: Not Supported 00:15:02.403 Firmware Activate/Download: Not Supported 00:15:02.403 Namespace Management: Not Supported 00:15:02.403 Device Self-Test: 
Not Supported 00:15:02.403 Directives: Not Supported 00:15:02.403 NVMe-MI: Not Supported 00:15:02.403 Virtualization Management: Not Supported 00:15:02.403 Doorbell Buffer Config: Not Supported 00:15:02.403 Get LBA Status Capability: Not Supported 00:15:02.403 Command & Feature Lockdown Capability: Not Supported 00:15:02.403 Abort Command Limit: 4 00:15:02.403 Async Event Request Limit: 4 00:15:02.403 Number of Firmware Slots: N/A 00:15:02.403 Firmware Slot 1 Read-Only: N/A 00:15:02.403 Firmware Activation Without Reset: N/A 00:15:02.403 Multiple Update Detection Support: N/A 00:15:02.403 Firmware Update Granularity: No Information Provided 00:15:02.403 Per-Namespace SMART Log: No 00:15:02.403 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.403 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:02.403 Command Effects Log Page: Supported 00:15:02.403 Get Log Page Extended Data: Supported 00:15:02.403 Telemetry Log Pages: Not Supported 00:15:02.403 Persistent Event Log Pages: Not Supported 00:15:02.403 Supported Log Pages Log Page: May Support 00:15:02.403 Commands Supported & Effects Log Page: Not Supported 00:15:02.403 Feature Identifiers & Effects Log Page:May Support 00:15:02.403 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.403 Data Area 4 for Telemetry Log: Not Supported 00:15:02.403 Error Log Page Entries Supported: 128 00:15:02.403 Keep Alive: Supported 00:15:02.403 Keep Alive Granularity: 10000 ms 00:15:02.403 00:15:02.403 NVM Command Set Attributes 00:15:02.403 ========================== 00:15:02.403 Submission Queue Entry Size 00:15:02.403 Max: 64 00:15:02.403 Min: 64 00:15:02.403 Completion Queue Entry Size 00:15:02.403 Max: 16 00:15:02.403 Min: 16 00:15:02.403 Number of Namespaces: 32 00:15:02.403 Compare Command: Supported 00:15:02.403 Write Uncorrectable Command: Not Supported 00:15:02.403 Dataset Management Command: Supported 00:15:02.403 Write Zeroes Command: Supported 00:15:02.403 Set Features Save Field: Not Supported 
00:15:02.403 Reservations: Not Supported 00:15:02.403 Timestamp: Not Supported 00:15:02.403 Copy: Supported 00:15:02.403 Volatile Write Cache: Present 00:15:02.403 Atomic Write Unit (Normal): 1 00:15:02.403 Atomic Write Unit (PFail): 1 00:15:02.403 Atomic Compare & Write Unit: 1 00:15:02.403 Fused Compare & Write: Supported 00:15:02.403 Scatter-Gather List 00:15:02.403 SGL Command Set: Supported (Dword aligned) 00:15:02.403 SGL Keyed: Not Supported 00:15:02.403 SGL Bit Bucket Descriptor: Not Supported 00:15:02.403 SGL Metadata Pointer: Not Supported 00:15:02.403 Oversized SGL: Not Supported 00:15:02.403 SGL Metadata Address: Not Supported 00:15:02.403 SGL Offset: Not Supported 00:15:02.403 Transport SGL Data Block: Not Supported 00:15:02.403 Replay Protected Memory Block: Not Supported 00:15:02.403 00:15:02.403 Firmware Slot Information 00:15:02.403 ========================= 00:15:02.403 Active slot: 1 00:15:02.403 Slot 1 Firmware Revision: 25.01 00:15:02.403 00:15:02.403 00:15:02.403 Commands Supported and Effects 00:15:02.403 ============================== 00:15:02.403 Admin Commands 00:15:02.403 -------------- 00:15:02.403 Get Log Page (02h): Supported 00:15:02.403 Identify (06h): Supported 00:15:02.404 Abort (08h): Supported 00:15:02.404 Set Features (09h): Supported 00:15:02.404 Get Features (0Ah): Supported 00:15:02.404 Asynchronous Event Request (0Ch): Supported 00:15:02.404 Keep Alive (18h): Supported 00:15:02.404 I/O Commands 00:15:02.404 ------------ 00:15:02.404 Flush (00h): Supported LBA-Change 00:15:02.404 Write (01h): Supported LBA-Change 00:15:02.404 Read (02h): Supported 00:15:02.404 Compare (05h): Supported 00:15:02.404 Write Zeroes (08h): Supported LBA-Change 00:15:02.404 Dataset Management (09h): Supported LBA-Change 00:15:02.404 Copy (19h): Supported LBA-Change 00:15:02.404 00:15:02.404 Error Log 00:15:02.404 ========= 00:15:02.404 00:15:02.404 Arbitration 00:15:02.404 =========== 00:15:02.404 Arbitration Burst: 1 00:15:02.404 00:15:02.404 Power 
Management 00:15:02.404 ================ 00:15:02.404 Number of Power States: 1 00:15:02.404 Current Power State: Power State #0 00:15:02.404 Power State #0: 00:15:02.404 Max Power: 0.00 W 00:15:02.404 Non-Operational State: Operational 00:15:02.404 Entry Latency: Not Reported 00:15:02.404 Exit Latency: Not Reported 00:15:02.404 Relative Read Throughput: 0 00:15:02.404 Relative Read Latency: 0 00:15:02.404 Relative Write Throughput: 0 00:15:02.404 Relative Write Latency: 0 00:15:02.404 Idle Power: Not Reported 00:15:02.404 Active Power: Not Reported 00:15:02.404 Non-Operational Permissive Mode: Not Supported 00:15:02.404 00:15:02.404 Health Information 00:15:02.404 ================== 00:15:02.404 Critical Warnings: 00:15:02.404 Available Spare Space: OK 00:15:02.404 Temperature: OK 00:15:02.404 Device Reliability: OK 00:15:02.404 Read Only: No 00:15:02.404 Volatile Memory Backup: OK 00:15:02.404 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:02.404 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:02.404 Available Spare: 0% 00:15:02.404 Available Sp[2024-10-01 15:49:12.471065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:02.404 [2024-10-01 15:49:12.471077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:02.404 [2024-10-01 15:49:12.471102] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:02.404 [2024-10-01 15:49:12.471111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.404 [2024-10-01 15:49:12.471117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.404 [2024-10-01 15:49:12.471122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.404 [2024-10-01 15:49:12.471128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.404 [2024-10-01 15:49:12.471252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:02.404 [2024-10-01 15:49:12.471261] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:02.404 [2024-10-01 15:49:12.472254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.404 [2024-10-01 15:49:12.472302] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:02.404 [2024-10-01 15:49:12.472308] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:02.404 [2024-10-01 15:49:12.473266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:02.404 [2024-10-01 15:49:12.473276] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:02.404 [2024-10-01 15:49:12.473328] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:02.404 [2024-10-01 15:49:12.475867] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.404 are Threshold: 0% 00:15:02.404 Life Percentage Used: 0% 00:15:02.404 Data Units Read: 0 00:15:02.404 Data Units Written: 0 00:15:02.404 Host Read Commands: 0 00:15:02.404 Host Write Commands: 0 00:15:02.404 Controller Busy Time: 0 minutes 
00:15:02.404 Power Cycles: 0 00:15:02.404 Power On Hours: 0 hours 00:15:02.404 Unsafe Shutdowns: 0 00:15:02.404 Unrecoverable Media Errors: 0 00:15:02.404 Lifetime Error Log Entries: 0 00:15:02.404 Warning Temperature Time: 0 minutes 00:15:02.404 Critical Temperature Time: 0 minutes 00:15:02.404 00:15:02.404 Number of Queues 00:15:02.404 ================ 00:15:02.404 Number of I/O Submission Queues: 127 00:15:02.404 Number of I/O Completion Queues: 127 00:15:02.404 00:15:02.404 Active Namespaces 00:15:02.404 ================= 00:15:02.404 Namespace ID:1 00:15:02.404 Error Recovery Timeout: Unlimited 00:15:02.404 Command Set Identifier: NVM (00h) 00:15:02.404 Deallocate: Supported 00:15:02.404 Deallocated/Unwritten Error: Not Supported 00:15:02.404 Deallocated Read Value: Unknown 00:15:02.404 Deallocate in Write Zeroes: Not Supported 00:15:02.404 Deallocated Guard Field: 0xFFFF 00:15:02.404 Flush: Supported 00:15:02.404 Reservation: Supported 00:15:02.404 Namespace Sharing Capabilities: Multiple Controllers 00:15:02.404 Size (in LBAs): 131072 (0GiB) 00:15:02.404 Capacity (in LBAs): 131072 (0GiB) 00:15:02.404 Utilization (in LBAs): 131072 (0GiB) 00:15:02.404 NGUID: 1BB3F07AD4EB42649F641D0A57409C4E 00:15:02.404 UUID: 1bb3f07a-d4eb-4264-9f64-1d0a57409c4e 00:15:02.404 Thin Provisioning: Not Supported 00:15:02.404 Per-NS Atomic Units: Yes 00:15:02.404 Atomic Boundary Size (Normal): 0 00:15:02.404 Atomic Boundary Size (PFail): 0 00:15:02.404 Atomic Boundary Offset: 0 00:15:02.404 Maximum Single Source Range Length: 65535 00:15:02.404 Maximum Copy Length: 65535 00:15:02.404 Maximum Source Range Count: 1 00:15:02.404 NGUID/EUI64 Never Reused: No 00:15:02.404 Namespace Write Protected: No 00:15:02.404 Number of LBA Formats: 1 00:15:02.404 Current LBA Format: LBA Format #00 00:15:02.404 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:02.404 00:15:02.404 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:02.663 [2024-10-01 15:49:12.686242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.933 Initializing NVMe Controllers 00:15:07.933 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.933 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.933 Initialization complete. Launching workers. 00:15:07.933 ======================================================== 00:15:07.933 Latency(us) 00:15:07.933 Device Information : IOPS MiB/s Average min max 00:15:07.933 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39922.03 155.95 3205.85 960.39 8625.54 00:15:07.933 ======================================================== 00:15:07.933 Total : 39922.03 155.95 3205.85 960.39 8625.54 00:15:07.933 00:15:07.933 [2024-10-01 15:49:17.704302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.933 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:07.933 [2024-10-01 15:49:17.919303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.310 Initializing NVMe Controllers 00:15:13.310 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.310 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:13.310 
Initialization complete. Launching workers. 00:15:13.310 ======================================================== 00:15:13.310 Latency(us) 00:15:13.310 Device Information : IOPS MiB/s Average min max 00:15:13.310 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.56 5985.20 10975.17 00:15:13.310 ======================================================== 00:15:13.310 Total : 16051.20 62.70 7984.56 5985.20 10975.17 00:15:13.310 00:15:13.310 [2024-10-01 15:49:22.960449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.310 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:13.310 [2024-10-01 15:49:23.154357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.612 [2024-10-01 15:49:28.227151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.612 Initializing NVMe Controllers 00:15:18.612 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.612 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:18.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:18.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:18.612 Initialization complete. Launching workers. 
00:15:18.612 Starting thread on core 2 00:15:18.612 Starting thread on core 3 00:15:18.612 Starting thread on core 1 00:15:18.612 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:18.612 [2024-10-01 15:49:28.499351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.898 [2024-10-01 15:49:31.561487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.898 Initializing NVMe Controllers 00:15:21.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:21.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:21.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:21.898 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:21.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:21.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:21.898 Initialization complete. Launching workers. 
00:15:21.898 Starting thread on core 1 with urgent priority queue 00:15:21.898 Starting thread on core 2 with urgent priority queue 00:15:21.898 Starting thread on core 3 with urgent priority queue 00:15:21.898 Starting thread on core 0 with urgent priority queue 00:15:21.898 SPDK bdev Controller (SPDK1 ) core 0: 8601.00 IO/s 11.63 secs/100000 ios 00:15:21.898 SPDK bdev Controller (SPDK1 ) core 1: 7711.67 IO/s 12.97 secs/100000 ios 00:15:21.898 SPDK bdev Controller (SPDK1 ) core 2: 8117.67 IO/s 12.32 secs/100000 ios 00:15:21.898 SPDK bdev Controller (SPDK1 ) core 3: 10121.00 IO/s 9.88 secs/100000 ios 00:15:21.898 ======================================================== 00:15:21.898 00:15:21.898 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:21.898 [2024-10-01 15:49:31.838305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.898 Initializing NVMe Controllers 00:15:21.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.898 Namespace ID: 1 size: 0GB 00:15:21.898 Initialization complete. 00:15:21.898 INFO: using host memory buffer for IO 00:15:21.898 Hello world! 
00:15:21.898 [2024-10-01 15:49:31.872517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.898 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:22.157 [2024-10-01 15:49:32.141274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.094 Initializing NVMe Controllers 00:15:23.094 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.094 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.094 Initialization complete. Launching workers. 00:15:23.094 submit (in ns) avg, min, max = 7942.4, 3147.6, 4994960.0 00:15:23.094 complete (in ns) avg, min, max = 18300.4, 1722.9, 4992413.3 00:15:23.094 00:15:23.094 Submit histogram 00:15:23.094 ================ 00:15:23.094 Range in us Cumulative Count 00:15:23.094 3.139 - 3.154: 0.0060% ( 1) 00:15:23.094 3.154 - 3.170: 0.0120% ( 1) 00:15:23.094 3.170 - 3.185: 0.0180% ( 1) 00:15:23.094 3.185 - 3.200: 0.0360% ( 3) 00:15:23.094 3.200 - 3.215: 0.3238% ( 48) 00:15:23.094 3.215 - 3.230: 2.1289% ( 301) 00:15:23.094 3.230 - 3.246: 6.5787% ( 742) 00:15:23.094 3.246 - 3.261: 11.8561% ( 880) 00:15:23.094 3.261 - 3.276: 17.9190% ( 1011) 00:15:23.094 3.276 - 3.291: 24.8996% ( 1164) 00:15:23.094 3.291 - 3.307: 31.2324% ( 1056) 00:15:23.094 3.307 - 3.322: 37.4693% ( 1040) 00:15:23.094 3.322 - 3.337: 43.4903% ( 1004) 00:15:23.094 3.337 - 3.352: 49.3013% ( 969) 00:15:23.094 3.352 - 3.368: 54.5067% ( 868) 00:15:23.094 3.368 - 3.383: 62.1529% ( 1275) 00:15:23.094 3.383 - 3.398: 69.2714% ( 1187) 00:15:23.094 3.398 - 3.413: 74.5367% ( 878) 00:15:23.094 3.413 - 3.429: 79.2024% ( 778) 00:15:23.094 3.429 - 3.444: 82.3868% ( 531) 00:15:23.094 3.444 - 3.459: 84.8816% ( 416) 
00:15:23.094 3.459 - 3.474: 86.2549% ( 229) 00:15:23.094 3.474 - 3.490: 87.0585% ( 134) 00:15:23.094 3.490 - 3.505: 87.5022% ( 74) 00:15:23.094 3.505 - 3.520: 87.8981% ( 66) 00:15:23.094 3.520 - 3.535: 88.4918% ( 99) 00:15:23.094 3.535 - 3.550: 89.2354% ( 124) 00:15:23.094 3.550 - 3.566: 90.2189% ( 164) 00:15:23.094 3.566 - 3.581: 91.2324% ( 169) 00:15:23.094 3.581 - 3.596: 92.1559% ( 154) 00:15:23.094 3.596 - 3.611: 93.1934% ( 173) 00:15:23.094 3.611 - 3.627: 94.0630% ( 145) 00:15:23.094 3.627 - 3.642: 95.0525% ( 165) 00:15:23.094 3.642 - 3.657: 96.0060% ( 159) 00:15:23.094 3.657 - 3.672: 96.8936% ( 148) 00:15:23.094 3.672 - 3.688: 97.6552% ( 127) 00:15:23.094 3.688 - 3.703: 98.0990% ( 74) 00:15:23.094 3.703 - 3.718: 98.5847% ( 81) 00:15:23.094 3.718 - 3.733: 98.8846% ( 50) 00:15:23.094 3.733 - 3.749: 99.2324% ( 58) 00:15:23.094 3.749 - 3.764: 99.3523% ( 20) 00:15:23.094 3.764 - 3.779: 99.4903% ( 23) 00:15:23.094 3.779 - 3.794: 99.5862% ( 16) 00:15:23.094 3.794 - 3.810: 99.6342% ( 8) 00:15:23.094 3.810 - 3.825: 99.6462% ( 2) 00:15:23.094 3.840 - 3.855: 99.6522% ( 1) 00:15:23.094 3.931 - 3.962: 99.6582% ( 1) 00:15:23.094 4.937 - 4.968: 99.6642% ( 1) 00:15:23.094 4.968 - 4.998: 99.6702% ( 1) 00:15:23.094 4.998 - 5.029: 99.6762% ( 1) 00:15:23.094 5.090 - 5.120: 99.6882% ( 2) 00:15:23.094 5.242 - 5.272: 99.6942% ( 1) 00:15:23.094 5.333 - 5.364: 99.7001% ( 1) 00:15:23.094 5.364 - 5.394: 99.7121% ( 2) 00:15:23.094 5.608 - 5.638: 99.7181% ( 1) 00:15:23.094 5.638 - 5.669: 99.7241% ( 1) 00:15:23.094 5.669 - 5.699: 99.7301% ( 1) 00:15:23.094 5.699 - 5.730: 99.7361% ( 1) 00:15:23.094 5.730 - 5.760: 99.7481% ( 2) 00:15:23.094 5.821 - 5.851: 99.7541% ( 1) 00:15:23.094 5.973 - 6.004: 99.7601% ( 1) 00:15:23.094 6.004 - 6.034: 99.7661% ( 1) 00:15:23.094 6.217 - 6.248: 99.7721% ( 1) 00:15:23.094 6.278 - 6.309: 99.7841% ( 2) 00:15:23.094 6.309 - 6.339: 99.7901% ( 1) 00:15:23.094 6.370 - 6.400: 99.8021% ( 2) 00:15:23.094 6.522 - 6.552: 99.8141% ( 2) 00:15:23.094 6.613 - 6.644: 
99.8201% ( 1) 00:15:23.094 6.766 - 6.796: 99.8261% ( 1) 00:15:23.094 7.070 - 7.101: 99.8321% ( 1) 00:15:23.094 7.101 - 7.131: 99.8381% ( 1) 00:15:23.094 7.131 - 7.162: 99.8441% ( 1) 00:15:23.094 7.314 - 7.345: 99.8501% ( 1) 00:15:23.094 7.558 - 7.589: 99.8561% ( 1) 00:15:23.094 7.589 - 7.619: 99.8621% ( 1) 00:15:23.094 7.741 - 7.771: 99.8681% ( 1) 00:15:23.094 7.802 - 7.863: 99.8741% ( 1) 00:15:23.094 8.168 - 8.229: 99.8801% ( 1) 00:15:23.094 13.958 - 14.019: 99.8861% ( 1) 00:15:23.094 [2024-10-01 15:49:33.163185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.094 3167.573 - 3183.177: 99.8921% ( 1) 00:15:23.094 3994.575 - 4025.783: 99.9940% ( 17) 00:15:23.094 4993.219 - 5024.427: 100.0000% ( 1) 00:15:23.094 00:15:23.094 Complete histogram 00:15:23.094 ================== 00:15:23.094 Range in us Cumulative Count 00:15:23.094 1.722 - 1.730: 0.0060% ( 1) 00:15:23.094 1.745 - 1.752: 0.0180% ( 2) 00:15:23.094 1.752 - 1.760: 0.2219% ( 34) 00:15:23.094 1.760 - 1.768: 1.0735% ( 142) 00:15:23.094 1.768 - 1.775: 2.2729% ( 200) 00:15:23.094 1.775 - 1.783: 3.4123% ( 190) 00:15:23.094 1.783 - 1.790: 4.4498% ( 173) 00:15:23.094 1.790 - 1.798: 5.1034% ( 109) 00:15:23.094 1.798 - 1.806: 8.0720% ( 495) 00:15:23.094 1.806 - 1.813: 24.6237% ( 2760) 00:15:23.094 1.813 - 1.821: 55.0405% ( 5072) 00:15:23.094 1.821 - 1.829: 78.7346% ( 3951) 00:15:23.094 1.829 - 1.836: 89.5832% ( 1809) 00:15:23.094 1.836 - 1.844: 94.1229% ( 757) 00:15:23.094 1.844 - 1.851: 96.3118% ( 365) 00:15:23.094 1.851 - 1.859: 97.3673% ( 176) 00:15:23.094 1.859 - 1.867: 97.7511% ( 64) 00:15:23.094 1.867 - 1.874: 97.9970% ( 41) 00:15:23.094 1.874 - 1.882: 98.2489% ( 42) 00:15:23.094 1.882 - 1.890: 98.5907% ( 57) 00:15:23.094 1.890 - 1.897: 98.9325% ( 57) 00:15:23.094 1.897 - 1.905: 99.2144% ( 47) 00:15:23.094 1.905 - 1.912: 99.3523% ( 23) 00:15:23.094 1.912 - 1.920: 99.4003% ( 8) 00:15:23.094 1.920 - 1.928: 99.4303% ( 5) 00:15:23.094 1.928 - 1.935: 
99.4423% ( 2) 00:15:23.094 1.935 - 1.943: 99.4483% ( 1) 00:15:23.094 1.996 - 2.011: 99.4603% ( 2) 00:15:23.094 2.072 - 2.088: 99.4663% ( 1) 00:15:23.094 2.210 - 2.225: 99.4723% ( 1) 00:15:23.094 3.688 - 3.703: 99.4783% ( 1) 00:15:23.094 3.870 - 3.886: 99.4843% ( 1) 00:15:23.094 3.931 - 3.962: 99.4903% ( 1) 00:15:23.094 4.145 - 4.175: 99.4963% ( 1) 00:15:23.094 4.206 - 4.236: 99.5022% ( 1) 00:15:23.094 4.510 - 4.541: 99.5082% ( 1) 00:15:23.094 4.571 - 4.602: 99.5142% ( 1) 00:15:23.094 4.693 - 4.724: 99.5202% ( 1) 00:15:23.094 4.876 - 4.907: 99.5262% ( 1) 00:15:23.094 5.181 - 5.211: 99.5322% ( 1) 00:15:23.094 5.303 - 5.333: 99.5382% ( 1) 00:15:23.094 5.516 - 5.547: 99.5442% ( 1) 00:15:23.094 5.699 - 5.730: 99.5502% ( 1) 00:15:23.094 5.790 - 5.821: 99.5562% ( 1) 00:15:23.094 5.882 - 5.912: 99.5622% ( 1) 00:15:23.094 6.034 - 6.065: 99.5682% ( 1) 00:15:23.094 6.095 - 6.126: 99.5742% ( 1) 00:15:23.094 6.278 - 6.309: 99.5802% ( 1) 00:15:23.094 8.046 - 8.107: 99.5862% ( 1) 00:15:23.094 3011.535 - 3027.139: 99.5982% ( 2) 00:15:23.094 3994.575 - 4025.783: 99.9880% ( 65) 00:15:23.094 4119.406 - 4150.613: 99.9940% ( 1) 00:15:23.094 4962.011 - 4993.219: 100.0000% ( 1) 00:15:23.094 00:15:23.094 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:23.094 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:23.094 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:23.094 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:23.094 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.353 [ 00:15:23.353 
{ 00:15:23.353 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.353 "subtype": "Discovery", 00:15:23.353 "listen_addresses": [], 00:15:23.353 "allow_any_host": true, 00:15:23.353 "hosts": [] 00:15:23.353 }, 00:15:23.353 { 00:15:23.353 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.353 "subtype": "NVMe", 00:15:23.353 "listen_addresses": [ 00:15:23.353 { 00:15:23.353 "trtype": "VFIOUSER", 00:15:23.353 "adrfam": "IPv4", 00:15:23.353 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.353 "trsvcid": "0" 00:15:23.353 } 00:15:23.353 ], 00:15:23.354 "allow_any_host": true, 00:15:23.354 "hosts": [], 00:15:23.354 "serial_number": "SPDK1", 00:15:23.354 "model_number": "SPDK bdev Controller", 00:15:23.354 "max_namespaces": 32, 00:15:23.354 "min_cntlid": 1, 00:15:23.354 "max_cntlid": 65519, 00:15:23.354 "namespaces": [ 00:15:23.354 { 00:15:23.354 "nsid": 1, 00:15:23.354 "bdev_name": "Malloc1", 00:15:23.354 "name": "Malloc1", 00:15:23.354 "nguid": "1BB3F07AD4EB42649F641D0A57409C4E", 00:15:23.354 "uuid": "1bb3f07a-d4eb-4264-9f64-1d0a57409c4e" 00:15:23.354 } 00:15:23.354 ] 00:15:23.354 }, 00:15:23.354 { 00:15:23.354 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.354 "subtype": "NVMe", 00:15:23.354 "listen_addresses": [ 00:15:23.354 { 00:15:23.354 "trtype": "VFIOUSER", 00:15:23.354 "adrfam": "IPv4", 00:15:23.354 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.354 "trsvcid": "0" 00:15:23.354 } 00:15:23.354 ], 00:15:23.354 "allow_any_host": true, 00:15:23.354 "hosts": [], 00:15:23.354 "serial_number": "SPDK2", 00:15:23.354 "model_number": "SPDK bdev Controller", 00:15:23.354 "max_namespaces": 32, 00:15:23.354 "min_cntlid": 1, 00:15:23.354 "max_cntlid": 65519, 00:15:23.354 "namespaces": [ 00:15:23.354 { 00:15:23.354 "nsid": 1, 00:15:23.354 "bdev_name": "Malloc2", 00:15:23.354 "name": "Malloc2", 00:15:23.354 "nguid": "0868AC3EFBE54BB5AA93EB421E7674E1", 00:15:23.354 "uuid": "0868ac3e-fbe5-4bb5-aa93-eb421e7674e1" 00:15:23.354 } 00:15:23.354 ] 00:15:23.354 } 
00:15:23.354 ] 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2402696 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:23.354 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:23.354 [2024-10-01 15:49:33.544305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.613 Malloc3 00:15:23.613 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:23.613 [2024-10-01 15:49:33.781101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.872 Asynchronous Event Request test 00:15:23.872 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.872 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.872 Registering asynchronous event callbacks... 00:15:23.872 Starting namespace attribute notice tests for all controllers... 00:15:23.872 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.872 aer_cb - Changed Namespace 00:15:23.872 Cleaning up... 
00:15:23.872 [ 00:15:23.872 { 00:15:23.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.872 "subtype": "Discovery", 00:15:23.872 "listen_addresses": [], 00:15:23.872 "allow_any_host": true, 00:15:23.872 "hosts": [] 00:15:23.872 }, 00:15:23.872 { 00:15:23.872 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.872 "subtype": "NVMe", 00:15:23.872 "listen_addresses": [ 00:15:23.872 { 00:15:23.872 "trtype": "VFIOUSER", 00:15:23.872 "adrfam": "IPv4", 00:15:23.872 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.872 "trsvcid": "0" 00:15:23.872 } 00:15:23.872 ], 00:15:23.872 "allow_any_host": true, 00:15:23.872 "hosts": [], 00:15:23.872 "serial_number": "SPDK1", 00:15:23.872 "model_number": "SPDK bdev Controller", 00:15:23.872 "max_namespaces": 32, 00:15:23.872 "min_cntlid": 1, 00:15:23.872 "max_cntlid": 65519, 00:15:23.872 "namespaces": [ 00:15:23.872 { 00:15:23.872 "nsid": 1, 00:15:23.872 "bdev_name": "Malloc1", 00:15:23.872 "name": "Malloc1", 00:15:23.872 "nguid": "1BB3F07AD4EB42649F641D0A57409C4E", 00:15:23.872 "uuid": "1bb3f07a-d4eb-4264-9f64-1d0a57409c4e" 00:15:23.872 }, 00:15:23.872 { 00:15:23.872 "nsid": 2, 00:15:23.872 "bdev_name": "Malloc3", 00:15:23.872 "name": "Malloc3", 00:15:23.872 "nguid": "92B4FFBC39384518A7E3E866A7B13ED6", 00:15:23.872 "uuid": "92b4ffbc-3938-4518-a7e3-e866a7b13ed6" 00:15:23.872 } 00:15:23.872 ] 00:15:23.872 }, 00:15:23.872 { 00:15:23.872 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.872 "subtype": "NVMe", 00:15:23.872 "listen_addresses": [ 00:15:23.872 { 00:15:23.872 "trtype": "VFIOUSER", 00:15:23.872 "adrfam": "IPv4", 00:15:23.872 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.872 "trsvcid": "0" 00:15:23.872 } 00:15:23.872 ], 00:15:23.872 "allow_any_host": true, 00:15:23.872 "hosts": [], 00:15:23.872 "serial_number": "SPDK2", 00:15:23.872 "model_number": "SPDK bdev Controller", 00:15:23.872 "max_namespaces": 32, 00:15:23.872 "min_cntlid": 1, 00:15:23.872 "max_cntlid": 65519, 00:15:23.872 "namespaces": [ 
00:15:23.872 { 00:15:23.872 "nsid": 1, 00:15:23.872 "bdev_name": "Malloc2", 00:15:23.872 "name": "Malloc2", 00:15:23.872 "nguid": "0868AC3EFBE54BB5AA93EB421E7674E1", 00:15:23.872 "uuid": "0868ac3e-fbe5-4bb5-aa93-eb421e7674e1" 00:15:23.872 } 00:15:23.872 ] 00:15:23.872 } 00:15:23.872 ] 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2402696 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:23.872 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:23.872 [2024-10-01 15:49:34.011314] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:15:23.872 [2024-10-01 15:49:34.011342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2402710 ] 00:15:23.872 [2024-10-01 15:49:34.038050] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:23.872 [2024-10-01 15:49:34.046107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.872 [2024-10-01 15:49:34.046132] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa826562000 00:15:23.872 [2024-10-01 15:49:34.047109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.048113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.049118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.050127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.051130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.052142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.053149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:23.872 
[2024-10-01 15:49:34.054160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:23.872 [2024-10-01 15:49:34.055167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:23.873 [2024-10-01 15:49:34.055176] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa826557000 00:15:23.873 [2024-10-01 15:49:34.056233] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.133 [2024-10-01 15:49:34.067551] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:24.133 [2024-10-01 15:49:34.067577] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:24.133 [2024-10-01 15:49:34.072663] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.133 [2024-10-01 15:49:34.072698] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:24.133 [2024-10-01 15:49:34.072767] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:24.133 [2024-10-01 15:49:34.072783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:24.133 [2024-10-01 15:49:34.072788] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:24.133 [2024-10-01 15:49:34.073666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:24.134 [2024-10-01 15:49:34.073676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:24.134 [2024-10-01 15:49:34.073682] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:24.134 [2024-10-01 15:49:34.074672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.134 [2024-10-01 15:49:34.074681] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:24.134 [2024-10-01 15:49:34.074687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.075678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:24.134 [2024-10-01 15:49:34.075686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.076681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:24.134 [2024-10-01 15:49:34.076689] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:24.134 [2024-10-01 15:49:34.076694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.076699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.076804] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:24.134 [2024-10-01 15:49:34.076809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.076813] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:24.134 [2024-10-01 15:49:34.077688] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:24.134 [2024-10-01 15:49:34.078700] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:24.134 [2024-10-01 15:49:34.079711] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.134 [2024-10-01 15:49:34.080716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.134 [2024-10-01 15:49:34.080754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:24.134 [2024-10-01 15:49:34.081723] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:24.134 [2024-10-01 15:49:34.081731] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:24.134 [2024-10-01 15:49:34.081735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.081752] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:24.134 [2024-10-01 15:49:34.081759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.081770] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.134 [2024-10-01 15:49:34.081775] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.134 [2024-10-01 15:49:34.081778] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.134 [2024-10-01 15:49:34.081788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.089870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.089882] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:24.134 [2024-10-01 15:49:34.089887] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:24.134 [2024-10-01 15:49:34.089893] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:24.134 [2024-10-01 15:49:34.089897] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:24.134 [2024-10-01 15:49:34.089901] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:24.134 [2024-10-01 15:49:34.089905] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:24.134 [2024-10-01 15:49:34.089909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.089916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.089925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.097866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.097878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.134 [2024-10-01 15:49:34.097886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.134 [2024-10-01 15:49:34.097893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.134 [2024-10-01 15:49:34.097900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.134 [2024-10-01 15:49:34.097904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.097913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.097921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.105866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.105873] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:24.134 [2024-10-01 15:49:34.105878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.105884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.105891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.105899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.113868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.113920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.113927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.113936] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:24.134 [2024-10-01 15:49:34.113941] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:24.134 [2024-10-01 15:49:34.113944] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.134 [2024-10-01 15:49:34.113950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.121868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.121879] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:24.134 [2024-10-01 15:49:34.121890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.121897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.121903] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.134 [2024-10-01 15:49:34.121907] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.134 [2024-10-01 15:49:34.121910] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.134 [2024-10-01 15:49:34.121916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.129868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.129881] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.129888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.129895] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.134 [2024-10-01 15:49:34.129898] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.134 [2024-10-01 15:49:34.129901] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.134 [2024-10-01 15:49:34.129907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.134 [2024-10-01 15:49:34.137867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:24.134 [2024-10-01 15:49:34.137876] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.137882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.137891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:24.134 [2024-10-01 15:49:34.137896] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:24.135 [2024-10-01 15:49:34.137901] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:24.135 [2024-10-01 15:49:34.137905] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:24.135 [2024-10-01 15:49:34.137911] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:24.135 [2024-10-01 15:49:34.137916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:24.135 [2024-10-01 15:49:34.137920] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:24.135 [2024-10-01 15:49:34.137935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.145868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.145880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.153867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.153879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.161868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.161880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.169866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.169881] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:24.135 [2024-10-01 15:49:34.169886] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:24.135 [2024-10-01 15:49:34.169889] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:24.135 [2024-10-01 15:49:34.169892] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:24.135 [2024-10-01 15:49:34.169895] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:24.135 [2024-10-01 15:49:34.169901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:24.135 [2024-10-01 15:49:34.169907] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:24.135 [2024-10-01 15:49:34.169911] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:24.135 [2024-10-01 15:49:34.169914] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.135 [2024-10-01 15:49:34.169919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.169925] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:24.135 [2024-10-01 15:49:34.169929] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.135 
[2024-10-01 15:49:34.169932] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.135 [2024-10-01 15:49:34.169937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.169944] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:24.135 [2024-10-01 15:49:34.169947] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:24.135 [2024-10-01 15:49:34.169950] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.135 [2024-10-01 15:49:34.169956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:24.135 [2024-10-01 15:49:34.177868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.177880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.177890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:24.135 [2024-10-01 15:49:34.177896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:24.135 ===================================================== 00:15:24.135 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.135 ===================================================== 00:15:24.135 Controller Capabilities/Features 00:15:24.135 ================================ 00:15:24.135 Vendor ID: 4e58 00:15:24.135 Subsystem Vendor ID: 4e58 
00:15:24.135 Serial Number: SPDK2 00:15:24.135 Model Number: SPDK bdev Controller 00:15:24.135 Firmware Version: 25.01 00:15:24.135 Recommended Arb Burst: 6 00:15:24.135 IEEE OUI Identifier: 8d 6b 50 00:15:24.135 Multi-path I/O 00:15:24.135 May have multiple subsystem ports: Yes 00:15:24.135 May have multiple controllers: Yes 00:15:24.135 Associated with SR-IOV VF: No 00:15:24.135 Max Data Transfer Size: 131072 00:15:24.135 Max Number of Namespaces: 32 00:15:24.135 Max Number of I/O Queues: 127 00:15:24.135 NVMe Specification Version (VS): 1.3 00:15:24.135 NVMe Specification Version (Identify): 1.3 00:15:24.135 Maximum Queue Entries: 256 00:15:24.135 Contiguous Queues Required: Yes 00:15:24.135 Arbitration Mechanisms Supported 00:15:24.135 Weighted Round Robin: Not Supported 00:15:24.135 Vendor Specific: Not Supported 00:15:24.135 Reset Timeout: 15000 ms 00:15:24.135 Doorbell Stride: 4 bytes 00:15:24.135 NVM Subsystem Reset: Not Supported 00:15:24.135 Command Sets Supported 00:15:24.135 NVM Command Set: Supported 00:15:24.135 Boot Partition: Not Supported 00:15:24.135 Memory Page Size Minimum: 4096 bytes 00:15:24.135 Memory Page Size Maximum: 4096 bytes 00:15:24.135 Persistent Memory Region: Not Supported 00:15:24.135 Optional Asynchronous Events Supported 00:15:24.135 Namespace Attribute Notices: Supported 00:15:24.135 Firmware Activation Notices: Not Supported 00:15:24.135 ANA Change Notices: Not Supported 00:15:24.135 PLE Aggregate Log Change Notices: Not Supported 00:15:24.135 LBA Status Info Alert Notices: Not Supported 00:15:24.135 EGE Aggregate Log Change Notices: Not Supported 00:15:24.135 Normal NVM Subsystem Shutdown event: Not Supported 00:15:24.135 Zone Descriptor Change Notices: Not Supported 00:15:24.135 Discovery Log Change Notices: Not Supported 00:15:24.135 Controller Attributes 00:15:24.135 128-bit Host Identifier: Supported 00:15:24.135 Non-Operational Permissive Mode: Not Supported 00:15:24.135 NVM Sets: Not Supported 00:15:24.135 Read Recovery 
Levels: Not Supported 00:15:24.135 Endurance Groups: Not Supported 00:15:24.135 Predictable Latency Mode: Not Supported 00:15:24.135 Traffic Based Keep ALive: Not Supported 00:15:24.135 Namespace Granularity: Not Supported 00:15:24.135 SQ Associations: Not Supported 00:15:24.135 UUID List: Not Supported 00:15:24.135 Multi-Domain Subsystem: Not Supported 00:15:24.135 Fixed Capacity Management: Not Supported 00:15:24.135 Variable Capacity Management: Not Supported 00:15:24.135 Delete Endurance Group: Not Supported 00:15:24.135 Delete NVM Set: Not Supported 00:15:24.135 Extended LBA Formats Supported: Not Supported 00:15:24.135 Flexible Data Placement Supported: Not Supported 00:15:24.135 00:15:24.135 Controller Memory Buffer Support 00:15:24.135 ================================ 00:15:24.135 Supported: No 00:15:24.135 00:15:24.135 Persistent Memory Region Support 00:15:24.135 ================================ 00:15:24.135 Supported: No 00:15:24.135 00:15:24.135 Admin Command Set Attributes 00:15:24.135 ============================ 00:15:24.135 Security Send/Receive: Not Supported 00:15:24.135 Format NVM: Not Supported 00:15:24.135 Firmware Activate/Download: Not Supported 00:15:24.135 Namespace Management: Not Supported 00:15:24.135 Device Self-Test: Not Supported 00:15:24.135 Directives: Not Supported 00:15:24.135 NVMe-MI: Not Supported 00:15:24.135 Virtualization Management: Not Supported 00:15:24.135 Doorbell Buffer Config: Not Supported 00:15:24.135 Get LBA Status Capability: Not Supported 00:15:24.135 Command & Feature Lockdown Capability: Not Supported 00:15:24.135 Abort Command Limit: 4 00:15:24.135 Async Event Request Limit: 4 00:15:24.135 Number of Firmware Slots: N/A 00:15:24.135 Firmware Slot 1 Read-Only: N/A 00:15:24.135 Firmware Activation Without Reset: N/A 00:15:24.135 Multiple Update Detection Support: N/A 00:15:24.135 Firmware Update Granularity: No Information Provided 00:15:24.135 Per-Namespace SMART Log: No 00:15:24.135 Asymmetric Namespace Access 
Log Page: Not Supported 00:15:24.135 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:24.135 Command Effects Log Page: Supported 00:15:24.135 Get Log Page Extended Data: Supported 00:15:24.135 Telemetry Log Pages: Not Supported 00:15:24.135 Persistent Event Log Pages: Not Supported 00:15:24.135 Supported Log Pages Log Page: May Support 00:15:24.135 Commands Supported & Effects Log Page: Not Supported 00:15:24.135 Feature Identifiers & Effects Log Page:May Support 00:15:24.135 NVMe-MI Commands & Effects Log Page: May Support 00:15:24.135 Data Area 4 for Telemetry Log: Not Supported 00:15:24.135 Error Log Page Entries Supported: 128 00:15:24.135 Keep Alive: Supported 00:15:24.135 Keep Alive Granularity: 10000 ms 00:15:24.135 00:15:24.135 NVM Command Set Attributes 00:15:24.135 ========================== 00:15:24.135 Submission Queue Entry Size 00:15:24.135 Max: 64 00:15:24.135 Min: 64 00:15:24.135 Completion Queue Entry Size 00:15:24.135 Max: 16 00:15:24.135 Min: 16 00:15:24.135 Number of Namespaces: 32 00:15:24.135 Compare Command: Supported 00:15:24.136 Write Uncorrectable Command: Not Supported 00:15:24.136 Dataset Management Command: Supported 00:15:24.136 Write Zeroes Command: Supported 00:15:24.136 Set Features Save Field: Not Supported 00:15:24.136 Reservations: Not Supported 00:15:24.136 Timestamp: Not Supported 00:15:24.136 Copy: Supported 00:15:24.136 Volatile Write Cache: Present 00:15:24.136 Atomic Write Unit (Normal): 1 00:15:24.136 Atomic Write Unit (PFail): 1 00:15:24.136 Atomic Compare & Write Unit: 1 00:15:24.136 Fused Compare & Write: Supported 00:15:24.136 Scatter-Gather List 00:15:24.136 SGL Command Set: Supported (Dword aligned) 00:15:24.136 SGL Keyed: Not Supported 00:15:24.136 SGL Bit Bucket Descriptor: Not Supported 00:15:24.136 SGL Metadata Pointer: Not Supported 00:15:24.136 Oversized SGL: Not Supported 00:15:24.136 SGL Metadata Address: Not Supported 00:15:24.136 SGL Offset: Not Supported 00:15:24.136 Transport SGL Data Block: Not Supported 
00:15:24.136 Replay Protected Memory Block: Not Supported 00:15:24.136 00:15:24.136 Firmware Slot Information 00:15:24.136 ========================= 00:15:24.136 Active slot: 1 00:15:24.136 Slot 1 Firmware Revision: 25.01 00:15:24.136 00:15:24.136 00:15:24.136 Commands Supported and Effects 00:15:24.136 ============================== 00:15:24.136 Admin Commands 00:15:24.136 -------------- 00:15:24.136 Get Log Page (02h): Supported 00:15:24.136 Identify (06h): Supported 00:15:24.136 Abort (08h): Supported 00:15:24.136 Set Features (09h): Supported 00:15:24.136 Get Features (0Ah): Supported 00:15:24.136 Asynchronous Event Request (0Ch): Supported 00:15:24.136 Keep Alive (18h): Supported 00:15:24.136 I/O Commands 00:15:24.136 ------------ 00:15:24.136 Flush (00h): Supported LBA-Change 00:15:24.136 Write (01h): Supported LBA-Change 00:15:24.136 Read (02h): Supported 00:15:24.136 Compare (05h): Supported 00:15:24.136 Write Zeroes (08h): Supported LBA-Change 00:15:24.136 Dataset Management (09h): Supported LBA-Change 00:15:24.136 Copy (19h): Supported LBA-Change 00:15:24.136 00:15:24.136 Error Log 00:15:24.136 ========= 00:15:24.136 00:15:24.136 Arbitration 00:15:24.136 =========== 00:15:24.136 Arbitration Burst: 1 00:15:24.136 00:15:24.136 Power Management 00:15:24.136 ================ 00:15:24.136 Number of Power States: 1 00:15:24.136 Current Power State: Power State #0 00:15:24.136 Power State #0: 00:15:24.136 Max Power: 0.00 W 00:15:24.136 Non-Operational State: Operational 00:15:24.136 Entry Latency: Not Reported 00:15:24.136 Exit Latency: Not Reported 00:15:24.136 Relative Read Throughput: 0 00:15:24.136 Relative Read Latency: 0 00:15:24.136 Relative Write Throughput: 0 00:15:24.136 Relative Write Latency: 0 00:15:24.136 Idle Power: Not Reported 00:15:24.136 Active Power: Not Reported 00:15:24.136 Non-Operational Permissive Mode: Not Supported 00:15:24.136 00:15:24.136 Health Information 00:15:24.136 ================== 00:15:24.136 Critical Warnings: 00:15:24.136 
Available Spare Space: OK 00:15:24.136 Temperature: OK 00:15:24.136 Device Reliability: OK 00:15:24.136 Read Only: No 00:15:24.136 Volatile Memory Backup: OK 00:15:24.136 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:24.136 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:24.136 Available Spare: 0% 00:15:24.136 Available Sp[2024-10-01 15:49:34.177978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:24.136 [2024-10-01 15:49:34.185867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:24.136 [2024-10-01 15:49:34.185897] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:24.136 [2024-10-01 15:49:34.185905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.136 [2024-10-01 15:49:34.185911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.136 [2024-10-01 15:49:34.185916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.136 [2024-10-01 15:49:34.185921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.136 [2024-10-01 15:49:34.185960] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.136 [2024-10-01 15:49:34.185969] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:24.136 [2024-10-01 15:49:34.186971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:15:24.136 [2024-10-01 15:49:34.187014] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:24.136 [2024-10-01 15:49:34.187020] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:24.136 [2024-10-01 15:49:34.187977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:24.136 [2024-10-01 15:49:34.187988] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:24.136 [2024-10-01 15:49:34.188040] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:24.136 [2024-10-01 15:49:34.188991] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.136 are Threshold: 0% 00:15:24.136 Life Percentage Used: 0% 00:15:24.136 Data Units Read: 0 00:15:24.136 Data Units Written: 0 00:15:24.136 Host Read Commands: 0 00:15:24.136 Host Write Commands: 0 00:15:24.136 Controller Busy Time: 0 minutes 00:15:24.136 Power Cycles: 0 00:15:24.136 Power On Hours: 0 hours 00:15:24.136 Unsafe Shutdowns: 0 00:15:24.136 Unrecoverable Media Errors: 0 00:15:24.136 Lifetime Error Log Entries: 0 00:15:24.136 Warning Temperature Time: 0 minutes 00:15:24.136 Critical Temperature Time: 0 minutes 00:15:24.136 00:15:24.136 Number of Queues 00:15:24.136 ================ 00:15:24.136 Number of I/O Submission Queues: 127 00:15:24.136 Number of I/O Completion Queues: 127 00:15:24.136 00:15:24.136 Active Namespaces 00:15:24.136 ================= 00:15:24.136 Namespace ID:1 00:15:24.136 Error Recovery Timeout: Unlimited 00:15:24.136 Command Set Identifier: NVM (00h) 00:15:24.136 Deallocate: Supported 00:15:24.136 Deallocated/Unwritten Error: Not Supported 
00:15:24.136 Deallocated Read Value: Unknown 00:15:24.136 Deallocate in Write Zeroes: Not Supported 00:15:24.136 Deallocated Guard Field: 0xFFFF 00:15:24.136 Flush: Supported 00:15:24.136 Reservation: Supported 00:15:24.136 Namespace Sharing Capabilities: Multiple Controllers 00:15:24.136 Size (in LBAs): 131072 (0GiB) 00:15:24.136 Capacity (in LBAs): 131072 (0GiB) 00:15:24.136 Utilization (in LBAs): 131072 (0GiB) 00:15:24.136 NGUID: 0868AC3EFBE54BB5AA93EB421E7674E1 00:15:24.136 UUID: 0868ac3e-fbe5-4bb5-aa93-eb421e7674e1 00:15:24.136 Thin Provisioning: Not Supported 00:15:24.136 Per-NS Atomic Units: Yes 00:15:24.136 Atomic Boundary Size (Normal): 0 00:15:24.136 Atomic Boundary Size (PFail): 0 00:15:24.136 Atomic Boundary Offset: 0 00:15:24.136 Maximum Single Source Range Length: 65535 00:15:24.136 Maximum Copy Length: 65535 00:15:24.136 Maximum Source Range Count: 1 00:15:24.136 NGUID/EUI64 Never Reused: No 00:15:24.136 Namespace Write Protected: No 00:15:24.136 Number of LBA Formats: 1 00:15:24.136 Current LBA Format: LBA Format #00 00:15:24.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:24.136 00:15:24.136 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:24.395 [2024-10-01 15:49:34.410115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.666 Initializing NVMe Controllers 00:15:29.666 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.666 Initialization complete. Launching workers. 
00:15:29.666 ======================================================== 00:15:29.666 Latency(us) 00:15:29.666 Device Information : IOPS MiB/s Average min max 00:15:29.666 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39983.38 156.19 3201.73 952.23 8614.84 00:15:29.666 ======================================================== 00:15:29.666 Total : 39983.38 156.19 3201.73 952.23 8614.84 00:15:29.666 00:15:29.666 [2024-10-01 15:49:39.512098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.666 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:29.666 [2024-10-01 15:49:39.729714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.965 Initializing NVMe Controllers 00:15:34.965 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.965 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.965 Initialization complete. Launching workers. 
00:15:34.965 ======================================================== 00:15:34.965 Latency(us) 00:15:34.965 Device Information : IOPS MiB/s Average min max 00:15:34.965 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.40 156.05 3204.35 949.04 7441.98 00:15:34.965 ======================================================== 00:15:34.965 Total : 39949.40 156.05 3204.35 949.04 7441.98 00:15:34.965 00:15:34.965 [2024-10-01 15:49:44.754591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.965 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:34.965 [2024-10-01 15:49:44.949265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.234 [2024-10-01 15:49:50.080959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.234 Initializing NVMe Controllers 00:15:40.234 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.234 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:40.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:40.234 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:40.234 Initialization complete. Launching workers. 
00:15:40.234 Starting thread on core 2 00:15:40.234 Starting thread on core 3 00:15:40.234 Starting thread on core 1 00:15:40.234 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:40.234 [2024-10-01 15:49:50.355740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.524 [2024-10-01 15:49:53.415081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.524 Initializing NVMe Controllers 00:15:43.524 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.524 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.524 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:43.524 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:43.524 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:43.524 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:43.524 Initialization complete. Launching workers. 
00:15:43.524 Starting thread on core 1 with urgent priority queue 00:15:43.524 Starting thread on core 2 with urgent priority queue 00:15:43.524 Starting thread on core 3 with urgent priority queue 00:15:43.524 Starting thread on core 0 with urgent priority queue 00:15:43.524 SPDK bdev Controller (SPDK2 ) core 0: 10114.67 IO/s 9.89 secs/100000 ios 00:15:43.524 SPDK bdev Controller (SPDK2 ) core 1: 9437.00 IO/s 10.60 secs/100000 ios 00:15:43.524 SPDK bdev Controller (SPDK2 ) core 2: 6630.33 IO/s 15.08 secs/100000 ios 00:15:43.524 SPDK bdev Controller (SPDK2 ) core 3: 8112.33 IO/s 12.33 secs/100000 ios 00:15:43.524 ======================================================== 00:15:43.524 00:15:43.524 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.524 [2024-10-01 15:49:53.693301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.524 Initializing NVMe Controllers 00:15:43.524 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.524 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.524 Namespace ID: 1 size: 0GB 00:15:43.524 Initialization complete. 00:15:43.524 INFO: using host memory buffer for IO 00:15:43.524 Hello world! 
00:15:43.524 [2024-10-01 15:49:53.703361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.783 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.783 [2024-10-01 15:49:53.972602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.159 Initializing NVMe Controllers 00:15:45.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.159 Initialization complete. Launching workers. 00:15:45.159 submit (in ns) avg, min, max = 7419.2, 3194.3, 3999820.0 00:15:45.159 complete (in ns) avg, min, max = 18886.2, 1749.5, 4044058.1 00:15:45.159 00:15:45.159 Submit histogram 00:15:45.159 ================ 00:15:45.159 Range in us Cumulative Count 00:15:45.159 3.185 - 3.200: 0.0119% ( 2) 00:15:45.159 3.200 - 3.215: 0.2080% ( 33) 00:15:45.159 3.215 - 3.230: 1.7944% ( 267) 00:15:45.159 3.230 - 3.246: 5.5140% ( 626) 00:15:45.159 3.246 - 3.261: 11.1289% ( 945) 00:15:45.159 3.261 - 3.276: 17.1420% ( 1012) 00:15:45.159 3.276 - 3.291: 23.6661% ( 1098) 00:15:45.159 3.291 - 3.307: 29.7980% ( 1032) 00:15:45.159 3.307 - 3.322: 36.0250% ( 1048) 00:15:45.159 3.322 - 3.337: 41.6162% ( 941) 00:15:45.159 3.337 - 3.352: 47.3856% ( 971) 00:15:45.159 3.352 - 3.368: 52.4242% ( 848) 00:15:45.159 3.368 - 3.383: 58.7403% ( 1063) 00:15:45.159 3.383 - 3.398: 66.3458% ( 1280) 00:15:45.159 3.398 - 3.413: 72.2222% ( 989) 00:15:45.159 3.413 - 3.429: 77.3500% ( 863) 00:15:45.159 3.429 - 3.444: 81.4379% ( 688) 00:15:45.159 3.444 - 3.459: 84.2840% ( 479) 00:15:45.159 3.459 - 3.474: 86.2686% ( 334) 00:15:45.159 3.474 - 3.490: 87.3619% ( 184) 00:15:45.159 3.490 - 3.505: 87.9204% ( 
94) 00:15:45.159 3.505 - 3.520: 88.3601% ( 74) 00:15:45.159 3.520 - 3.535: 88.9542% ( 100) 00:15:45.159 3.535 - 3.550: 89.6554% ( 118) 00:15:45.159 3.550 - 3.566: 90.4932% ( 141) 00:15:45.159 3.566 - 3.581: 91.4201% ( 156) 00:15:45.159 3.581 - 3.596: 92.3351% ( 154) 00:15:45.159 3.596 - 3.611: 93.2204% ( 149) 00:15:45.159 3.611 - 3.627: 94.0226% ( 135) 00:15:45.159 3.627 - 3.642: 95.0089% ( 166) 00:15:45.159 3.642 - 3.657: 95.9418% ( 157) 00:15:45.159 3.657 - 3.672: 96.6667% ( 122) 00:15:45.159 3.672 - 3.688: 97.4450% ( 131) 00:15:45.159 3.688 - 3.703: 98.0511% ( 102) 00:15:45.159 3.703 - 3.718: 98.4730% ( 71) 00:15:45.159 3.718 - 3.733: 98.8592% ( 65) 00:15:45.159 3.733 - 3.749: 99.1444% ( 48) 00:15:45.159 3.749 - 3.764: 99.3108% ( 28) 00:15:45.159 3.764 - 3.779: 99.4058% ( 16) 00:15:45.159 3.779 - 3.794: 99.5009% ( 16) 00:15:45.159 3.794 - 3.810: 99.5663% ( 11) 00:15:45.159 3.810 - 3.825: 99.6138% ( 8) 00:15:45.159 3.825 - 3.840: 99.6316% ( 3) 00:15:45.159 3.840 - 3.855: 99.6376% ( 1) 00:15:45.159 3.855 - 3.870: 99.6435% ( 1) 00:15:45.159 3.870 - 3.886: 99.6494% ( 1) 00:15:45.159 3.886 - 3.901: 99.6554% ( 1) 00:15:45.159 4.876 - 4.907: 99.6613% ( 1) 00:15:45.159 5.059 - 5.090: 99.6673% ( 1) 00:15:45.159 5.090 - 5.120: 99.6791% ( 2) 00:15:45.159 5.120 - 5.150: 99.6851% ( 1) 00:15:45.159 5.303 - 5.333: 99.6910% ( 1) 00:15:45.159 5.394 - 5.425: 99.6970% ( 1) 00:15:45.159 5.516 - 5.547: 99.7029% ( 1) 00:15:45.159 5.577 - 5.608: 99.7089% ( 1) 00:15:45.159 5.608 - 5.638: 99.7148% ( 1) 00:15:45.159 5.638 - 5.669: 99.7207% ( 1) 00:15:45.159 5.821 - 5.851: 99.7267% ( 1) 00:15:45.159 5.851 - 5.882: 99.7386% ( 2) 00:15:45.159 5.912 - 5.943: 99.7445% ( 1) 00:15:45.159 5.943 - 5.973: 99.7504% ( 1) 00:15:45.159 6.004 - 6.034: 99.7623% ( 2) 00:15:45.159 6.034 - 6.065: 99.7683% ( 1) 00:15:45.159 6.095 - 6.126: 99.7742% ( 1) 00:15:45.159 6.126 - 6.156: 99.7802% ( 1) 00:15:45.159 6.278 - 6.309: 99.7861% ( 1) 00:15:45.159 6.309 - 6.339: 99.7920% ( 1) 00:15:45.159 6.430 - 6.461: 
99.7980% ( 1) 00:15:45.159 6.491 - 6.522: 99.8039% ( 1) 00:15:45.159 6.552 - 6.583: 99.8099% ( 1) 00:15:45.159 6.583 - 6.613: 99.8158% ( 1) 00:15:45.159 6.613 - 6.644: 99.8217% ( 1) 00:15:45.159 6.827 - 6.857: 99.8277% ( 1) 00:15:45.159 6.857 - 6.888: 99.8396% ( 2) 00:15:45.159 6.949 - 6.979: 99.8455% ( 1) 00:15:45.159 7.101 - 7.131: 99.8515% ( 1) 00:15:45.159 7.162 - 7.192: 99.8574% ( 1) 00:15:45.159 7.436 - 7.467: 99.8633% ( 1) 00:15:45.159 7.558 - 7.589: 99.8693% ( 1) 00:15:45.159 [2024-10-01 15:49:55.073824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.159 7.924 - 7.985: 99.8812% ( 2) 00:15:45.159 8.290 - 8.350: 99.8871% ( 1) 00:15:45.159 8.472 - 8.533: 99.8930% ( 1) 00:15:45.159 8.777 - 8.838: 99.8990% ( 1) 00:15:45.159 3994.575 - 4025.783: 100.0000% ( 17) 00:15:45.159 00:15:45.159 Complete histogram 00:15:45.159 ================== 00:15:45.159 Range in us Cumulative Count 00:15:45.159 1.745 - 1.752: 0.0178% ( 3) 00:15:45.159 1.752 - 1.760: 0.3446% ( 55) 00:15:45.159 1.760 - 1.768: 2.2757% ( 325) 00:15:45.159 1.768 - 1.775: 4.8366% ( 431) 00:15:45.159 1.775 - 1.783: 6.4231% ( 267) 00:15:45.159 1.783 - 1.790: 7.3737% ( 160) 00:15:45.159 1.790 - 1.798: 9.0374% ( 280) 00:15:45.159 1.798 - 1.806: 18.9721% ( 1672) 00:15:45.159 1.806 - 1.813: 46.5538% ( 4642) 00:15:45.159 1.813 - 1.821: 74.9673% ( 4782) 00:15:45.159 1.821 - 1.829: 89.1741% ( 2391) 00:15:45.159 1.829 - 1.836: 93.5710% ( 740) 00:15:45.159 1.836 - 1.844: 95.8289% ( 380) 00:15:45.159 1.844 - 1.851: 97.0410% ( 204) 00:15:45.159 1.851 - 1.859: 97.5520% ( 86) 00:15:45.159 1.859 - 1.867: 97.8075% ( 43) 00:15:45.159 1.867 - 1.874: 98.0630% ( 43) 00:15:45.159 1.874 - 1.882: 98.4076% ( 58) 00:15:45.159 1.882 - 1.890: 98.7819% ( 63) 00:15:45.159 1.890 - 1.897: 99.0018% ( 37) 00:15:45.159 1.897 - 1.905: 99.2513% ( 42) 00:15:45.159 1.905 - 1.912: 99.3702% ( 20) 00:15:45.159 1.912 - 1.920: 99.3939% ( 4) 00:15:45.159 1.920 - 1.928: 99.4058% ( 2) 
00:15:45.159 1.935 - 1.943: 99.4177% ( 2) 00:15:45.159 1.950 - 1.966: 99.4236% ( 1) 00:15:45.159 1.966 - 1.981: 99.4296% ( 1) 00:15:45.159 1.981 - 1.996: 99.4474% ( 3) 00:15:45.159 2.057 - 2.072: 99.4534% ( 1) 00:15:45.159 2.194 - 2.210: 99.4593% ( 1) 00:15:45.159 3.398 - 3.413: 99.4652% ( 1) 00:15:45.159 3.581 - 3.596: 99.4712% ( 1) 00:15:45.160 3.642 - 3.657: 99.4771% ( 1) 00:15:45.160 3.657 - 3.672: 99.4831% ( 1) 00:15:45.160 3.825 - 3.840: 99.4890% ( 1) 00:15:45.160 3.992 - 4.023: 99.4949% ( 1) 00:15:45.160 4.084 - 4.114: 99.5009% ( 1) 00:15:45.160 4.328 - 4.358: 99.5068% ( 1) 00:15:45.160 4.571 - 4.602: 99.5128% ( 1) 00:15:45.160 4.632 - 4.663: 99.5187% ( 1) 00:15:45.160 4.815 - 4.846: 99.5247% ( 1) 00:15:45.160 4.907 - 4.937: 99.5306% ( 1) 00:15:45.160 4.998 - 5.029: 99.5425% ( 2) 00:15:45.160 5.181 - 5.211: 99.5484% ( 1) 00:15:45.160 5.364 - 5.394: 99.5544% ( 1) 00:15:45.160 7.802 - 7.863: 99.5603% ( 1) 00:15:45.160 9.143 - 9.204: 99.5663% ( 1) 00:15:45.160 12.008 - 12.069: 99.5722% ( 1) 00:15:45.160 3635.688 - 3651.291: 99.5781% ( 1) 00:15:45.160 3978.971 - 3994.575: 99.5841% ( 1) 00:15:45.160 3994.575 - 4025.783: 99.9941% ( 69) 00:15:45.160 4025.783 - 4056.990: 100.0000% ( 1) 00:15:45.160 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:15:45.160 [ 00:15:45.160 { 00:15:45.160 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.160 "subtype": "Discovery", 00:15:45.160 "listen_addresses": [], 00:15:45.160 "allow_any_host": true, 00:15:45.160 "hosts": [] 00:15:45.160 }, 00:15:45.160 { 00:15:45.160 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.160 "subtype": "NVMe", 00:15:45.160 "listen_addresses": [ 00:15:45.160 { 00:15:45.160 "trtype": "VFIOUSER", 00:15:45.160 "adrfam": "IPv4", 00:15:45.160 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.160 "trsvcid": "0" 00:15:45.160 } 00:15:45.160 ], 00:15:45.160 "allow_any_host": true, 00:15:45.160 "hosts": [], 00:15:45.160 "serial_number": "SPDK1", 00:15:45.160 "model_number": "SPDK bdev Controller", 00:15:45.160 "max_namespaces": 32, 00:15:45.160 "min_cntlid": 1, 00:15:45.160 "max_cntlid": 65519, 00:15:45.160 "namespaces": [ 00:15:45.160 { 00:15:45.160 "nsid": 1, 00:15:45.160 "bdev_name": "Malloc1", 00:15:45.160 "name": "Malloc1", 00:15:45.160 "nguid": "1BB3F07AD4EB42649F641D0A57409C4E", 00:15:45.160 "uuid": "1bb3f07a-d4eb-4264-9f64-1d0a57409c4e" 00:15:45.160 }, 00:15:45.160 { 00:15:45.160 "nsid": 2, 00:15:45.160 "bdev_name": "Malloc3", 00:15:45.160 "name": "Malloc3", 00:15:45.160 "nguid": "92B4FFBC39384518A7E3E866A7B13ED6", 00:15:45.160 "uuid": "92b4ffbc-3938-4518-a7e3-e866a7b13ed6" 00:15:45.160 } 00:15:45.160 ] 00:15:45.160 }, 00:15:45.160 { 00:15:45.160 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.160 "subtype": "NVMe", 00:15:45.160 "listen_addresses": [ 00:15:45.160 { 00:15:45.160 "trtype": "VFIOUSER", 00:15:45.160 "adrfam": "IPv4", 00:15:45.160 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.160 "trsvcid": "0" 00:15:45.160 } 00:15:45.160 ], 00:15:45.160 "allow_any_host": true, 00:15:45.160 "hosts": [], 00:15:45.160 "serial_number": "SPDK2", 00:15:45.160 "model_number": "SPDK bdev Controller", 00:15:45.160 "max_namespaces": 32, 00:15:45.160 "min_cntlid": 1, 00:15:45.160 "max_cntlid": 65519, 00:15:45.160 
"namespaces": [ 00:15:45.160 { 00:15:45.160 "nsid": 1, 00:15:45.160 "bdev_name": "Malloc2", 00:15:45.160 "name": "Malloc2", 00:15:45.160 "nguid": "0868AC3EFBE54BB5AA93EB421E7674E1", 00:15:45.160 "uuid": "0868ac3e-fbe5-4bb5-aa93-eb421e7674e1" 00:15:45.160 } 00:15:45.160 ] 00:15:45.160 } 00:15:45.160 ] 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2406218 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:45.160 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:45.419 [2024-10-01 15:49:55.467315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.419 Malloc4 00:15:45.419 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:45.678 [2024-10-01 15:49:55.716134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.678 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.678 Asynchronous Event Request test 00:15:45.678 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.678 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.678 Registering asynchronous event callbacks... 00:15:45.678 Starting namespace attribute notice tests for all controllers... 00:15:45.678 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:45.678 aer_cb - Changed Namespace 00:15:45.678 Cleaning up... 
00:15:45.937 [ 00:15:45.937 { 00:15:45.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.937 "subtype": "Discovery", 00:15:45.937 "listen_addresses": [], 00:15:45.937 "allow_any_host": true, 00:15:45.937 "hosts": [] 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.937 "subtype": "NVMe", 00:15:45.937 "listen_addresses": [ 00:15:45.937 { 00:15:45.937 "trtype": "VFIOUSER", 00:15:45.937 "adrfam": "IPv4", 00:15:45.937 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.937 "trsvcid": "0" 00:15:45.937 } 00:15:45.937 ], 00:15:45.937 "allow_any_host": true, 00:15:45.937 "hosts": [], 00:15:45.937 "serial_number": "SPDK1", 00:15:45.937 "model_number": "SPDK bdev Controller", 00:15:45.937 "max_namespaces": 32, 00:15:45.937 "min_cntlid": 1, 00:15:45.937 "max_cntlid": 65519, 00:15:45.937 "namespaces": [ 00:15:45.937 { 00:15:45.937 "nsid": 1, 00:15:45.937 "bdev_name": "Malloc1", 00:15:45.937 "name": "Malloc1", 00:15:45.937 "nguid": "1BB3F07AD4EB42649F641D0A57409C4E", 00:15:45.937 "uuid": "1bb3f07a-d4eb-4264-9f64-1d0a57409c4e" 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "nsid": 2, 00:15:45.937 "bdev_name": "Malloc3", 00:15:45.937 "name": "Malloc3", 00:15:45.937 "nguid": "92B4FFBC39384518A7E3E866A7B13ED6", 00:15:45.937 "uuid": "92b4ffbc-3938-4518-a7e3-e866a7b13ed6" 00:15:45.937 } 00:15:45.937 ] 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.937 "subtype": "NVMe", 00:15:45.937 "listen_addresses": [ 00:15:45.937 { 00:15:45.937 "trtype": "VFIOUSER", 00:15:45.937 "adrfam": "IPv4", 00:15:45.937 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.937 "trsvcid": "0" 00:15:45.937 } 00:15:45.937 ], 00:15:45.937 "allow_any_host": true, 00:15:45.937 "hosts": [], 00:15:45.937 "serial_number": "SPDK2", 00:15:45.937 "model_number": "SPDK bdev Controller", 00:15:45.937 "max_namespaces": 32, 00:15:45.937 "min_cntlid": 1, 00:15:45.937 "max_cntlid": 65519, 00:15:45.937 "namespaces": [ 
00:15:45.937 { 00:15:45.937 "nsid": 1, 00:15:45.937 "bdev_name": "Malloc2", 00:15:45.937 "name": "Malloc2", 00:15:45.937 "nguid": "0868AC3EFBE54BB5AA93EB421E7674E1", 00:15:45.937 "uuid": "0868ac3e-fbe5-4bb5-aa93-eb421e7674e1" 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "nsid": 2, 00:15:45.937 "bdev_name": "Malloc4", 00:15:45.937 "name": "Malloc4", 00:15:45.937 "nguid": "E6F96C6BB1AE48AD9986BAD7CDEA249A", 00:15:45.937 "uuid": "e6f96c6b-b1ae-48ad-9986-bad7cdea249a" 00:15:45.937 } 00:15:45.937 ] 00:15:45.937 } 00:15:45.937 ] 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2406218 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2398537 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2398537 ']' 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2398537 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398537 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398537' 00:15:45.937 killing process with pid 2398537 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 2398537 00:15:45.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2398537 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2406400 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2406400' 00:15:46.196 Process pid: 2406400 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2406400 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2406400 ']' 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.196 
15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.196 [2024-10-01 15:49:56.312071] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:46.196 [2024-10-01 15:49:56.312997] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:46.196 [2024-10-01 15:49:56.313036] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.196 [2024-10-01 15:49:56.378856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.455 [2024-10-01 15:49:56.447090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.455 [2024-10-01 15:49:56.447132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.455 [2024-10-01 15:49:56.447138] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.455 [2024-10-01 15:49:56.447144] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.455 [2024-10-01 15:49:56.447149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:46.455 [2024-10-01 15:49:56.447272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.455 [2024-10-01 15:49:56.447387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.455 [2024-10-01 15:49:56.447503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.455 [2024-10-01 15:49:56.447505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.455 [2024-10-01 15:49:56.532626] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:46.455 [2024-10-01 15:49:56.532671] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:46.455 [2024-10-01 15:49:56.533840] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:46.455 [2024-10-01 15:49:56.533912] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:46.455 [2024-10-01 15:49:56.533959] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:47.022 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.022 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:47.022 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:48.398 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.398 Malloc1 00:15:48.399 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:48.657 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:48.916 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:49.175 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.175 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:49.175 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:49.433 Malloc2 00:15:49.433 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.433 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.691 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2406400 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2406400 ']' 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2406400 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.950 15:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2406400 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2406400' 00:15:49.950 killing process with pid 2406400 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2406400 00:15:49.950 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2406400 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.208 00:15:50.208 real 0m51.896s 00:15:50.208 user 3m18.439s 00:15:50.208 sys 0m3.319s 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:50.208 ************************************ 00:15:50.208 END TEST nvmf_vfio_user 00:15:50.208 ************************************ 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.208 ************************************ 00:15:50.208 START TEST nvmf_vfio_user_nvme_compliance 00:15:50.208 ************************************ 00:15:50.208 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.467 * Looking for test storage... 00:15:50.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.467 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.468 15:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.468 15:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.468 --rc genhtml_branch_coverage=1 00:15:50.468 --rc genhtml_function_coverage=1 00:15:50.468 --rc genhtml_legend=1 00:15:50.468 --rc geninfo_all_blocks=1 00:15:50.468 --rc geninfo_unexecuted_blocks=1 00:15:50.468 00:15:50.468 ' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.468 --rc genhtml_branch_coverage=1 00:15:50.468 --rc genhtml_function_coverage=1 00:15:50.468 --rc genhtml_legend=1 00:15:50.468 --rc geninfo_all_blocks=1 00:15:50.468 --rc geninfo_unexecuted_blocks=1 00:15:50.468 00:15:50.468 ' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.468 --rc genhtml_branch_coverage=1 00:15:50.468 --rc genhtml_function_coverage=1 00:15:50.468 --rc 
genhtml_legend=1 00:15:50.468 --rc geninfo_all_blocks=1 00:15:50.468 --rc geninfo_unexecuted_blocks=1 00:15:50.468 00:15:50.468 ' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:50.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.468 --rc genhtml_branch_coverage=1 00:15:50.468 --rc genhtml_function_coverage=1 00:15:50.468 --rc genhtml_legend=1 00:15:50.468 --rc geninfo_all_blocks=1 00:15:50.468 --rc geninfo_unexecuted_blocks=1 00:15:50.468 00:15:50.468 ' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 15:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.468 15:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.468 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2407170 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2407170' 00:15:50.469 Process pid: 2407170 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2407170 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2407170 ']' 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.469 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.469 [2024-10-01 15:50:00.618274] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:15:50.469 [2024-10-01 15:50:00.618325] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.727 [2024-10-01 15:50:00.688342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.727 [2024-10-01 15:50:00.761514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.727 [2024-10-01 15:50:00.761556] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.727 [2024-10-01 15:50:00.761566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.727 [2024-10-01 15:50:00.761572] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.727 [2024-10-01 15:50:00.761577] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:50.727 [2024-10-01 15:50:00.761638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.727 [2024-10-01 15:50:00.761676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.727 [2024-10-01 15:50:00.761676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.294 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.294 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:51.294 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.670 15:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.670 malloc0 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:52.670 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:52.670 00:15:52.670 00:15:52.670 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.670 http://cunit.sourceforge.net/ 00:15:52.670 00:15:52.670 00:15:52.670 Suite: nvme_compliance 00:15:52.670 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 15:50:02.673363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.670 [2024-10-01 15:50:02.674701] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:52.670 [2024-10-01 15:50:02.674717] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:52.670 [2024-10-01 15:50:02.674723] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:52.670 [2024-10-01 15:50:02.676387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.670 passed 00:15:52.670 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 15:50:02.751907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.670 [2024-10-01 15:50:02.754923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.670 passed 00:15:52.670 Test: admin_identify_ns ...[2024-10-01 15:50:02.834082] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.929 [2024-10-01 15:50:02.897876] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:52.929 [2024-10-01 15:50:02.905874] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:52.929 [2024-10-01 15:50:02.926967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:52.929 passed 00:15:52.929 Test: admin_get_features_mandatory_features ...[2024-10-01 15:50:03.000599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.929 [2024-10-01 15:50:03.003623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.929 passed 00:15:52.929 Test: admin_get_features_optional_features ...[2024-10-01 15:50:03.077111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.929 [2024-10-01 15:50:03.080133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.929 passed 00:15:53.187 Test: admin_set_features_number_of_queues ...[2024-10-01 15:50:03.155200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.187 [2024-10-01 15:50:03.262949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.187 passed 00:15:53.187 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 15:50:03.335425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.187 [2024-10-01 15:50:03.338441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.187 passed 00:15:53.445 Test: admin_get_log_page_with_lpo ...[2024-10-01 15:50:03.416078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.445 [2024-10-01 15:50:03.484875] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:53.445 [2024-10-01 15:50:03.497929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.445 passed 00:15:53.445 Test: fabric_property_get ...[2024-10-01 15:50:03.571474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.445 [2024-10-01 15:50:03.572704] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:53.445 [2024-10-01 15:50:03.574495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.445 passed 00:15:53.703 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 15:50:03.653002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.704 [2024-10-01 15:50:03.654224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:53.704 [2024-10-01 15:50:03.656024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.704 passed 00:15:53.704 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 15:50:03.731771] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.704 [2024-10-01 15:50:03.810872] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.704 [2024-10-01 15:50:03.826871] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.704 [2024-10-01 15:50:03.831959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.704 passed 00:15:53.962 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 15:50:03.907696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.962 [2024-10-01 15:50:03.908936] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:53.962 [2024-10-01 15:50:03.910715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.962 passed 00:15:53.962 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 15:50:03.988457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.962 [2024-10-01 15:50:04.064875] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.962 [2024-10-01 
15:50:04.088874] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.962 [2024-10-01 15:50:04.093949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.962 passed 00:15:54.220 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 15:50:04.166526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.220 [2024-10-01 15:50:04.167755] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:54.220 [2024-10-01 15:50:04.167776] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:54.220 [2024-10-01 15:50:04.170545] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.220 passed 00:15:54.220 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 15:50:04.247569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.220 [2024-10-01 15:50:04.338872] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:54.220 [2024-10-01 15:50:04.346874] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:54.220 [2024-10-01 15:50:04.354871] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:54.220 [2024-10-01 15:50:04.362871] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:54.220 [2024-10-01 15:50:04.391938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.478 passed 00:15:54.478 Test: admin_create_io_sq_verify_pc ...[2024-10-01 15:50:04.467480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.478 [2024-10-01 15:50:04.483876] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:54.478 [2024-10-01 15:50:04.501655] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.478 passed 00:15:54.478 Test: admin_create_io_qp_max_qps ...[2024-10-01 15:50:04.577171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.853 [2024-10-01 15:50:05.693874] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:56.111 [2024-10-01 15:50:06.077165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.111 passed 00:15:56.111 Test: admin_create_io_sq_shared_cq ...[2024-10-01 15:50:06.150931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.111 [2024-10-01 15:50:06.283873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:56.370 [2024-10-01 15:50:06.320932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.370 passed 00:15:56.370 00:15:56.370 Run Summary: Type Total Ran Passed Failed Inactive 00:15:56.370 suites 1 1 n/a 0 0 00:15:56.370 tests 18 18 18 0 0 00:15:56.370 asserts 360 360 360 0 n/a 00:15:56.370 00:15:56.370 Elapsed time = 1.500 seconds 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2407170 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2407170 ']' 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2407170 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2407170 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2407170' 00:15:56.370 killing process with pid 2407170 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2407170 00:15:56.370 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2407170 00:15:56.629 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:56.629 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:56.629 00:15:56.629 real 0m6.270s 00:15:56.629 user 0m17.695s 00:15:56.629 sys 0m0.576s 00:15:56.629 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.629 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.630 ************************************ 00:15:56.630 END TEST nvmf_vfio_user_nvme_compliance 00:15:56.630 ************************************ 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.630 ************************************ 00:15:56.630 START TEST nvmf_vfio_user_fuzz 00:15:56.630 ************************************ 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.630 * Looking for test storage... 00:15:56.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:56.630 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.890 15:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.890 --rc genhtml_branch_coverage=1 00:15:56.890 --rc genhtml_function_coverage=1 00:15:56.890 --rc genhtml_legend=1 00:15:56.890 --rc geninfo_all_blocks=1 00:15:56.890 --rc geninfo_unexecuted_blocks=1 00:15:56.890 00:15:56.890 ' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.890 --rc genhtml_branch_coverage=1 00:15:56.890 --rc genhtml_function_coverage=1 00:15:56.890 --rc genhtml_legend=1 00:15:56.890 --rc geninfo_all_blocks=1 00:15:56.890 --rc geninfo_unexecuted_blocks=1 00:15:56.890 00:15:56.890 ' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.890 --rc genhtml_branch_coverage=1 00:15:56.890 --rc genhtml_function_coverage=1 00:15:56.890 --rc genhtml_legend=1 00:15:56.890 --rc geninfo_all_blocks=1 00:15:56.890 --rc geninfo_unexecuted_blocks=1 00:15:56.890 00:15:56.890 ' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:56.890 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:56.890 --rc genhtml_branch_coverage=1 00:15:56.890 --rc genhtml_function_coverage=1 00:15:56.890 --rc genhtml_legend=1 00:15:56.890 --rc geninfo_all_blocks=1 00:15:56.890 --rc geninfo_unexecuted_blocks=1 00:15:56.890 00:15:56.890 ' 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:56.890 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.891 15:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2408376 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2408376' 00:15:56.891 Process pid: 2408376 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2408376 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2408376 ']' 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.891 15:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.891 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.827 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.828 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:57.828 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.766 malloc0 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:58.766 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:30.848 Fuzzing completed. Shutting down the fuzz application 00:16:30.848 00:16:30.848 Dumping successful admin opcodes: 00:16:30.848 8, 9, 10, 24, 00:16:30.848 Dumping successful io opcodes: 00:16:30.848 0, 00:16:30.848 NS: 0x200003a1ef00 I/O qp, Total commands completed: 984915, total successful commands: 3862, random_seed: 2369382656 00:16:30.848 NS: 0x200003a1ef00 admin qp, Total commands completed: 241161, total successful commands: 1936, random_seed: 1457733504 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2408376 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2408376 ']' 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2408376 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408376 00:16:30.848 15:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408376' 00:16:30.848 killing process with pid 2408376 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2408376 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2408376 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:30.848 00:16:30.848 real 0m32.928s 00:16:30.848 user 0m30.736s 00:16:30.848 sys 0m31.009s 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 ************************************ 00:16:30.848 END TEST nvmf_vfio_user_fuzz 00:16:30.848 ************************************ 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 ************************************ 00:16:30.848 START TEST nvmf_auth_target 00:16:30.848 ************************************ 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:30.848 * Looking for test storage... 00:16:30.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:16:30.848 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.849 15:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.849 15:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.849 --rc genhtml_branch_coverage=1 00:16:30.849 --rc genhtml_function_coverage=1 00:16:30.849 --rc genhtml_legend=1 00:16:30.849 --rc geninfo_all_blocks=1 00:16:30.849 --rc geninfo_unexecuted_blocks=1 00:16:30.849 00:16:30.849 ' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.849 --rc genhtml_branch_coverage=1 00:16:30.849 --rc genhtml_function_coverage=1 00:16:30.849 --rc genhtml_legend=1 00:16:30.849 --rc geninfo_all_blocks=1 00:16:30.849 --rc geninfo_unexecuted_blocks=1 00:16:30.849 00:16:30.849 ' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.849 --rc genhtml_branch_coverage=1 00:16:30.849 --rc genhtml_function_coverage=1 00:16:30.849 --rc genhtml_legend=1 00:16:30.849 --rc geninfo_all_blocks=1 00:16:30.849 --rc geninfo_unexecuted_blocks=1 00:16:30.849 00:16:30.849 ' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:30.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.849 --rc genhtml_branch_coverage=1 00:16:30.849 --rc genhtml_function_coverage=1 00:16:30.849 --rc genhtml_legend=1 00:16:30.849 
--rc geninfo_all_blocks=1 00:16:30.849 --rc geninfo_unexecuted_blocks=1 00:16:30.849 00:16:30.849 ' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.849 
15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:30.849 15:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:30.849 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:30.850 15:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.850 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.127 15:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:36.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:36.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:36.127 15:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:36.127 Found net devices under 0000:86:00.0: cvl_0_0 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:36.127 Found net devices under 0000:86:00.1: cvl_0_1 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.127 15:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.127 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.128 15:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:16:36.128 00:16:36.128 --- 10.0.0.2 ping statistics --- 00:16:36.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.128 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:36.128 00:16:36.128 --- 10.0.0.1 ping statistics --- 00:16:36.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.128 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.128 15:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2416800 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2416800 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2416800 ']' 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.128 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2416934 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@750 -- # digest=null 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8276c67f6dcd5474cce577fa8518dd867b0a6902de423001 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.pHb 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8276c67f6dcd5474cce577fa8518dd867b0a6902de423001 0 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8276c67f6dcd5474cce577fa8518dd867b0a6902de423001 0 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8276c67f6dcd5474cce577fa8518dd867b0a6902de423001 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.pHb 00:16:36.695 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.pHb 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pHb 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=24610a4749b0bab4dc98eb8363c1aafdfc8631c819c44e6562c0531cd76d2079 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.LtL 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 24610a4749b0bab4dc98eb8363c1aafdfc8631c819c44e6562c0531cd76d2079 3 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 24610a4749b0bab4dc98eb8363c1aafdfc8631c819c44e6562c0531cd76d2079 3 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=24610a4749b0bab4dc98eb8363c1aafdfc8631c819c44e6562c0531cd76d2079 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # digest=3 00:16:36.696 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.LtL 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.LtL 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.LtL 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=388703037a93b57baa4b16c23cfb3bb0 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.ZRl 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 388703037a93b57baa4b16c23cfb3bb0 1 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 
388703037a93b57baa4b16c23cfb3bb0 1 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=388703037a93b57baa4b16c23cfb3bb0 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.ZRl 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.ZRl 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ZRl 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9b690ed1bddb72d3c19f8aa810efaad911464920adeb013f 00:16:36.955 15:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.1pq 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9b690ed1bddb72d3c19f8aa810efaad911464920adeb013f 2 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 9b690ed1bddb72d3c19f8aa810efaad911464920adeb013f 2 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9b690ed1bddb72d3c19f8aa810efaad911464920adeb013f 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:16:36.955 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.1pq 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.1pq 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1pq 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A 
digests 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5b270c371251ca0536c0c9c845fd3fb00272686d34adf2c2 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Okc 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5b270c371251ca0536c0c9c845fd3fb00272686d34adf2c2 2 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5b270c371251ca0536c0c9c845fd3fb00272686d34adf2c2 2 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5b270c371251ca0536c0c9c845fd3fb00272686d34adf2c2 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Okc 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Okc 00:16:36.955 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Okc 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f8dc3e1172bbf6cb6cd0211acd3c88f4 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.nWL 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f8dc3e1172bbf6cb6cd0211acd3c88f4 1 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f8dc3e1172bbf6cb6cd0211acd3c88f4 1 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f8dc3e1172bbf6cb6cd0211acd3c88f4 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 
00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.nWL 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.nWL 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.nWL 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=58b70b913d5dd48a3c7781f5cad1cb9ef529139a64e4f3d07eee925aedeb7df2 00:16:36.956 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.wmR 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 58b70b913d5dd48a3c7781f5cad1cb9ef529139a64e4f3d07eee925aedeb7df2 3 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # 
format_key DHHC-1 58b70b913d5dd48a3c7781f5cad1cb9ef529139a64e4f3d07eee925aedeb7df2 3 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=58b70b913d5dd48a3c7781f5cad1cb9ef529139a64e4f3d07eee925aedeb7df2 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.wmR 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.wmR 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.wmR 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2416800 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2416800 ']' 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2416934 /var/tmp/host.sock 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2416934 ']' 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:37.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.251 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pHb 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pHb 00:16:37.551 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pHb 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.LtL ]] 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LtL 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LtL 00:16:37.903 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LtL 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ZRl 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ZRl 00:16:37.903 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ZRl 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.1pq ]] 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1pq 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1pq 00:16:38.162 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1pq 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Okc 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Okc 00:16:38.421 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Okc 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.nWL ]] 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWL 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWL 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWL 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wmR 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wmR 00:16:38.679 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wmR 00:16:38.938 15:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:38.938 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.938 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.938 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.938 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.938 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.197 15:50:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.197 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.455 00:16:39.455 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.455 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.455 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.714 { 00:16:39.714 "cntlid": 1, 00:16:39.714 "qid": 0, 00:16:39.714 "state": "enabled", 00:16:39.714 "thread": "nvmf_tgt_poll_group_000", 00:16:39.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.714 "listen_address": { 00:16:39.714 "trtype": "TCP", 00:16:39.714 "adrfam": "IPv4", 00:16:39.714 "traddr": "10.0.0.2", 00:16:39.714 "trsvcid": "4420" 00:16:39.714 }, 00:16:39.714 "peer_address": { 00:16:39.714 "trtype": "TCP", 00:16:39.714 "adrfam": "IPv4", 00:16:39.714 "traddr": "10.0.0.1", 00:16:39.714 "trsvcid": "33202" 00:16:39.714 }, 00:16:39.714 "auth": { 00:16:39.714 "state": "completed", 00:16:39.714 "digest": "sha256", 00:16:39.714 "dhgroup": "null" 00:16:39.714 } 00:16:39.714 } 00:16:39.714 ]' 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.714 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.972 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:39.972 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:40.538 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.796 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.054 00:16:41.054 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.054 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.055 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.313 { 00:16:41.313 "cntlid": 3, 00:16:41.313 "qid": 0, 00:16:41.313 "state": "enabled", 00:16:41.313 "thread": "nvmf_tgt_poll_group_000", 00:16:41.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.313 "listen_address": { 00:16:41.313 "trtype": "TCP", 00:16:41.313 "adrfam": "IPv4", 00:16:41.313 
"traddr": "10.0.0.2", 00:16:41.313 "trsvcid": "4420" 00:16:41.313 }, 00:16:41.313 "peer_address": { 00:16:41.313 "trtype": "TCP", 00:16:41.313 "adrfam": "IPv4", 00:16:41.313 "traddr": "10.0.0.1", 00:16:41.313 "trsvcid": "33230" 00:16:41.313 }, 00:16:41.313 "auth": { 00:16:41.313 "state": "completed", 00:16:41.313 "digest": "sha256", 00:16:41.313 "dhgroup": "null" 00:16:41.313 } 00:16:41.313 } 00:16:41.313 ]' 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.313 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.572 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:41.572 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.139 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.398 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.657 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.657 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.916 { 00:16:42.916 "cntlid": 5, 00:16:42.916 "qid": 0, 00:16:42.916 "state": "enabled", 00:16:42.916 "thread": "nvmf_tgt_poll_group_000", 00:16:42.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.916 "listen_address": { 00:16:42.916 "trtype": "TCP", 00:16:42.916 "adrfam": "IPv4", 00:16:42.916 "traddr": "10.0.0.2", 00:16:42.916 "trsvcid": "4420" 00:16:42.916 }, 00:16:42.916 "peer_address": { 00:16:42.916 "trtype": "TCP", 00:16:42.916 "adrfam": "IPv4", 00:16:42.916 "traddr": "10.0.0.1", 00:16:42.916 "trsvcid": "33250" 00:16:42.916 }, 00:16:42.916 "auth": { 00:16:42.916 "state": "completed", 00:16:42.916 "digest": "sha256", 00:16:42.916 "dhgroup": "null" 00:16:42.916 } 00:16:42.916 } 00:16:42.916 ]' 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.916 15:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.916 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.174 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:43.174 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.741 
15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.741 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.000 15:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.000 15:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.000 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.259 15:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.259 { 00:16:44.259 "cntlid": 7, 00:16:44.259 "qid": 0, 00:16:44.259 "state": "enabled", 00:16:44.259 "thread": "nvmf_tgt_poll_group_000", 00:16:44.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:44.259 "listen_address": { 00:16:44.259 "trtype": "TCP", 00:16:44.259 "adrfam": "IPv4", 00:16:44.259 "traddr": "10.0.0.2", 00:16:44.259 "trsvcid": "4420" 00:16:44.259 }, 00:16:44.259 "peer_address": { 00:16:44.259 "trtype": "TCP", 00:16:44.259 "adrfam": "IPv4", 00:16:44.259 "traddr": "10.0.0.1", 00:16:44.259 "trsvcid": "33286" 00:16:44.259 }, 00:16:44.259 "auth": { 00:16:44.259 "state": "completed", 00:16:44.259 "digest": "sha256", 00:16:44.259 "dhgroup": "null" 00:16:44.259 } 00:16:44.259 } 00:16:44.259 ]' 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.259 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.516 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.516 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.516 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.516 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.516 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:44.774 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:44.774 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.340 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.341 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.341 15:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.599 00:16:45.599 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.599 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.599 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.857 { 00:16:45.857 "cntlid": 9, 00:16:45.857 "qid": 0, 00:16:45.857 "state": "enabled", 00:16:45.857 "thread": "nvmf_tgt_poll_group_000", 00:16:45.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.857 "listen_address": { 00:16:45.857 "trtype": "TCP", 00:16:45.857 "adrfam": "IPv4", 00:16:45.857 "traddr": "10.0.0.2", 00:16:45.857 "trsvcid": "4420" 00:16:45.857 }, 00:16:45.857 "peer_address": { 
00:16:45.857 "trtype": "TCP", 00:16:45.857 "adrfam": "IPv4", 00:16:45.857 "traddr": "10.0.0.1", 00:16:45.857 "trsvcid": "44780" 00:16:45.857 }, 00:16:45.857 "auth": { 00:16:45.857 "state": "completed", 00:16:45.857 "digest": "sha256", 00:16:45.857 "dhgroup": "ffdhe2048" 00:16:45.857 } 00:16:45.857 } 00:16:45.857 ]' 00:16:45.857 15:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.857 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.857 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.116 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.116 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.116 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.116 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.116 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.375 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:46.375 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.942 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.942 15:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.942 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.943 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.943 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.943 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.201 00:16:47.201 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.201 15:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.201 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.460 { 00:16:47.460 "cntlid": 11, 00:16:47.460 "qid": 0, 00:16:47.460 "state": "enabled", 00:16:47.460 "thread": "nvmf_tgt_poll_group_000", 00:16:47.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.460 "listen_address": { 00:16:47.460 "trtype": "TCP", 00:16:47.460 "adrfam": "IPv4", 00:16:47.460 "traddr": "10.0.0.2", 00:16:47.460 "trsvcid": "4420" 00:16:47.460 }, 00:16:47.460 "peer_address": { 00:16:47.460 "trtype": "TCP", 00:16:47.460 "adrfam": "IPv4", 00:16:47.460 "traddr": "10.0.0.1", 00:16:47.460 "trsvcid": "44810" 00:16:47.460 }, 00:16:47.460 "auth": { 00:16:47.460 "state": "completed", 00:16:47.460 "digest": "sha256", 00:16:47.460 "dhgroup": "ffdhe2048" 00:16:47.460 } 00:16:47.460 } 00:16:47.460 ]' 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.460 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.719 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.719 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.719 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.719 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:47.719 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.285 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.544 15:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.544 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.802 00:16:48.802 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.802 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.802 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.059 { 00:16:49.059 "cntlid": 13, 00:16:49.059 "qid": 0, 00:16:49.059 "state": "enabled", 00:16:49.059 "thread": "nvmf_tgt_poll_group_000", 00:16:49.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.059 "listen_address": { 00:16:49.059 "trtype": "TCP", 00:16:49.059 "adrfam": "IPv4", 00:16:49.059 "traddr": "10.0.0.2", 00:16:49.059 "trsvcid": "4420" 00:16:49.059 }, 00:16:49.059 "peer_address": { 00:16:49.059 "trtype": "TCP", 00:16:49.059 "adrfam": "IPv4", 00:16:49.059 "traddr": "10.0.0.1", 00:16:49.059 "trsvcid": "44848" 00:16:49.059 }, 00:16:49.059 "auth": { 00:16:49.059 "state": "completed", 00:16:49.059 "digest": "sha256", 00:16:49.059 "dhgroup": "ffdhe2048" 00:16:49.059 } 00:16:49.059 } 00:16:49.059 ]' 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:49.059 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.317 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:49.317 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.884 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.142 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.400 00:16:50.400 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.400 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.400 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.658 { 00:16:50.658 "cntlid": 15, 00:16:50.658 "qid": 0, 00:16:50.658 "state": "enabled", 00:16:50.658 "thread": "nvmf_tgt_poll_group_000", 00:16:50.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.658 "listen_address": { 00:16:50.658 "trtype": "TCP", 00:16:50.658 "adrfam": "IPv4", 00:16:50.658 "traddr": "10.0.0.2", 00:16:50.658 "trsvcid": 
"4420" 00:16:50.658 }, 00:16:50.658 "peer_address": { 00:16:50.658 "trtype": "TCP", 00:16:50.658 "adrfam": "IPv4", 00:16:50.658 "traddr": "10.0.0.1", 00:16:50.658 "trsvcid": "44868" 00:16:50.658 }, 00:16:50.658 "auth": { 00:16:50.658 "state": "completed", 00:16:50.658 "digest": "sha256", 00:16:50.658 "dhgroup": "ffdhe2048" 00:16:50.658 } 00:16:50.658 } 00:16:50.658 ]' 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.658 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.916 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:50.916 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:51.482 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.482 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.482 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.482 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.482 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.483 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.483 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.483 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.483 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.741 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.999 00:16:51.999 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.999 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.999 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.257 { 00:16:52.257 "cntlid": 17, 00:16:52.257 "qid": 0, 00:16:52.257 "state": "enabled", 00:16:52.257 "thread": "nvmf_tgt_poll_group_000", 00:16:52.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.257 "listen_address": { 00:16:52.257 "trtype": "TCP", 00:16:52.257 "adrfam": "IPv4", 00:16:52.257 "traddr": "10.0.0.2", 00:16:52.257 "trsvcid": "4420" 00:16:52.257 }, 00:16:52.257 "peer_address": { 00:16:52.257 "trtype": "TCP", 00:16:52.257 "adrfam": "IPv4", 00:16:52.257 "traddr": "10.0.0.1", 00:16:52.257 "trsvcid": "44890" 00:16:52.257 }, 00:16:52.257 "auth": { 00:16:52.257 "state": "completed", 00:16:52.257 "digest": "sha256", 00:16:52.257 "dhgroup": "ffdhe3072" 00:16:52.257 } 00:16:52.257 } 00:16:52.257 ]' 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.257 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.258 15:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.258 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.258 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.258 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.258 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.258 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.516 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:52.516 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.084 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.342 15:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.342 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.601 00:16:53.601 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.601 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.601 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.859 { 00:16:53.859 "cntlid": 19, 00:16:53.859 "qid": 0, 00:16:53.859 "state": "enabled", 00:16:53.859 "thread": "nvmf_tgt_poll_group_000", 00:16:53.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.859 "listen_address": { 00:16:53.859 "trtype": "TCP", 00:16:53.859 "adrfam": "IPv4", 00:16:53.859 "traddr": "10.0.0.2", 00:16:53.859 "trsvcid": "4420" 00:16:53.859 }, 00:16:53.859 "peer_address": { 00:16:53.859 "trtype": "TCP", 00:16:53.859 "adrfam": "IPv4", 00:16:53.859 "traddr": "10.0.0.1", 00:16:53.859 "trsvcid": "44914" 00:16:53.859 }, 00:16:53.859 "auth": { 00:16:53.859 "state": "completed", 00:16:53.859 "digest": "sha256", 00:16:53.859 "dhgroup": "ffdhe3072" 00:16:53.859 } 00:16:53.859 } 00:16:53.859 ]' 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:53.859 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.118 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:54.118 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:54.685 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.944 15:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.201 00:16:55.201 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.201 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.201 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.458 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.458 { 00:16:55.458 "cntlid": 21, 00:16:55.458 "qid": 0, 00:16:55.458 "state": "enabled", 00:16:55.458 "thread": "nvmf_tgt_poll_group_000", 00:16:55.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.458 "listen_address": { 
00:16:55.458 "trtype": "TCP", 00:16:55.458 "adrfam": "IPv4", 00:16:55.458 "traddr": "10.0.0.2", 00:16:55.459 "trsvcid": "4420" 00:16:55.459 }, 00:16:55.459 "peer_address": { 00:16:55.459 "trtype": "TCP", 00:16:55.459 "adrfam": "IPv4", 00:16:55.459 "traddr": "10.0.0.1", 00:16:55.459 "trsvcid": "56600" 00:16:55.459 }, 00:16:55.459 "auth": { 00:16:55.459 "state": "completed", 00:16:55.459 "digest": "sha256", 00:16:55.459 "dhgroup": "ffdhe3072" 00:16:55.459 } 00:16:55.459 } 00:16:55.459 ]' 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.459 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.716 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:55.716 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.281 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.539 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.798 00:16:56.798 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.798 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:56.798 15:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.057 { 00:16:57.057 "cntlid": 23, 00:16:57.057 "qid": 0, 00:16:57.057 "state": "enabled", 00:16:57.057 "thread": "nvmf_tgt_poll_group_000", 00:16:57.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.057 "listen_address": { 00:16:57.057 "trtype": "TCP", 00:16:57.057 "adrfam": "IPv4", 00:16:57.057 "traddr": "10.0.0.2", 00:16:57.057 "trsvcid": "4420" 00:16:57.057 }, 00:16:57.057 "peer_address": { 00:16:57.057 "trtype": "TCP", 00:16:57.057 "adrfam": "IPv4", 00:16:57.057 "traddr": "10.0.0.1", 00:16:57.057 "trsvcid": "56630" 00:16:57.057 }, 00:16:57.057 "auth": { 00:16:57.057 "state": "completed", 00:16:57.057 "digest": "sha256", 00:16:57.057 "dhgroup": "ffdhe3072" 00:16:57.057 } 00:16:57.057 } 00:16:57.057 ]' 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.057 15:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.057 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.315 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:57.315 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:16:57.881 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.881 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.881 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:57.881 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.882 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.882 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.882 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.882 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.882 15:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.140 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.398 00:16:58.398 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.398 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.398 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.656 15:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.656 { 00:16:58.656 "cntlid": 25, 00:16:58.656 "qid": 0, 00:16:58.656 "state": "enabled", 00:16:58.656 "thread": "nvmf_tgt_poll_group_000", 00:16:58.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:58.656 "listen_address": { 00:16:58.656 "trtype": "TCP", 00:16:58.656 "adrfam": "IPv4", 00:16:58.656 "traddr": "10.0.0.2", 00:16:58.656 "trsvcid": "4420" 00:16:58.656 }, 00:16:58.656 "peer_address": { 00:16:58.656 "trtype": "TCP", 00:16:58.656 "adrfam": "IPv4", 00:16:58.656 "traddr": "10.0.0.1", 00:16:58.656 "trsvcid": "56666" 00:16:58.656 }, 00:16:58.656 "auth": { 00:16:58.656 "state": "completed", 00:16:58.656 "digest": "sha256", 00:16:58.656 "dhgroup": "ffdhe4096" 00:16:58.656 } 00:16:58.656 } 00:16:58.656 ]' 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.656 15:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.656 15:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.915 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:58.915 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.481 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.743 15:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.001 00:17:00.001 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.001 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.001 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.260 { 00:17:00.260 "cntlid": 27, 00:17:00.260 "qid": 0, 00:17:00.260 "state": "enabled", 00:17:00.260 "thread": "nvmf_tgt_poll_group_000", 00:17:00.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.260 
"listen_address": { 00:17:00.260 "trtype": "TCP", 00:17:00.260 "adrfam": "IPv4", 00:17:00.260 "traddr": "10.0.0.2", 00:17:00.260 "trsvcid": "4420" 00:17:00.260 }, 00:17:00.260 "peer_address": { 00:17:00.260 "trtype": "TCP", 00:17:00.260 "adrfam": "IPv4", 00:17:00.260 "traddr": "10.0.0.1", 00:17:00.260 "trsvcid": "56694" 00:17:00.260 }, 00:17:00.260 "auth": { 00:17:00.260 "state": "completed", 00:17:00.260 "digest": "sha256", 00:17:00.260 "dhgroup": "ffdhe4096" 00:17:00.260 } 00:17:00.260 } 00:17:00.260 ]' 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.260 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.518 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:00.518 15:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.121 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.379 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.636 00:17:01.636 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:01.636 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.636 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.894 { 00:17:01.894 "cntlid": 29, 00:17:01.894 "qid": 0, 00:17:01.894 "state": "enabled", 00:17:01.894 "thread": "nvmf_tgt_poll_group_000", 00:17:01.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.894 "listen_address": { 00:17:01.894 "trtype": "TCP", 00:17:01.894 "adrfam": "IPv4", 00:17:01.894 "traddr": "10.0.0.2", 00:17:01.894 "trsvcid": "4420" 00:17:01.894 }, 00:17:01.894 "peer_address": { 00:17:01.894 "trtype": "TCP", 00:17:01.894 "adrfam": "IPv4", 00:17:01.894 "traddr": "10.0.0.1", 00:17:01.894 "trsvcid": "56728" 00:17:01.894 }, 00:17:01.894 "auth": { 00:17:01.894 "state": "completed", 00:17:01.894 "digest": "sha256", 00:17:01.894 "dhgroup": "ffdhe4096" 00:17:01.894 } 00:17:01.894 } 00:17:01.894 ]' 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.894 15:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.894 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.894 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.894 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.894 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.152 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:02.152 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.719 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:02.978 15:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.978 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.236 00:17:03.236 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.237 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.237 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.496 15:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.496 { 00:17:03.496 "cntlid": 31, 00:17:03.496 "qid": 0, 00:17:03.496 "state": "enabled", 00:17:03.496 "thread": "nvmf_tgt_poll_group_000", 00:17:03.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:03.496 "listen_address": { 00:17:03.496 "trtype": "TCP", 00:17:03.496 "adrfam": "IPv4", 00:17:03.496 "traddr": "10.0.0.2", 00:17:03.496 "trsvcid": "4420" 00:17:03.496 }, 00:17:03.496 "peer_address": { 00:17:03.496 "trtype": "TCP", 00:17:03.496 "adrfam": "IPv4", 00:17:03.496 "traddr": "10.0.0.1", 00:17:03.496 "trsvcid": "56754" 00:17:03.496 }, 00:17:03.496 "auth": { 00:17:03.496 "state": "completed", 00:17:03.496 "digest": "sha256", 00:17:03.496 "dhgroup": "ffdhe4096" 00:17:03.496 } 00:17:03.496 } 00:17:03.496 ]' 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.496 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.496 15:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.754 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:03.754 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:04.320 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.579 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:04.579 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.579 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.580 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.839 00:17:04.839 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.839 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.839 15:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.098 { 00:17:05.098 "cntlid": 33, 00:17:05.098 "qid": 0, 00:17:05.098 "state": "enabled", 00:17:05.098 "thread": "nvmf_tgt_poll_group_000", 00:17:05.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.098 "listen_address": { 
00:17:05.098 "trtype": "TCP", 00:17:05.098 "adrfam": "IPv4", 00:17:05.098 "traddr": "10.0.0.2", 00:17:05.098 "trsvcid": "4420" 00:17:05.098 }, 00:17:05.098 "peer_address": { 00:17:05.098 "trtype": "TCP", 00:17:05.098 "adrfam": "IPv4", 00:17:05.098 "traddr": "10.0.0.1", 00:17:05.098 "trsvcid": "42878" 00:17:05.098 }, 00:17:05.098 "auth": { 00:17:05.098 "state": "completed", 00:17:05.098 "digest": "sha256", 00:17:05.098 "dhgroup": "ffdhe6144" 00:17:05.098 } 00:17:05.098 } 00:17:05.098 ]' 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.098 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.357 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:05.357 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:05.922 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.922 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.180 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.181 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.439 00:17:06.439 15:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.439 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.439 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.696 { 00:17:06.696 "cntlid": 35, 00:17:06.696 "qid": 0, 00:17:06.696 "state": "enabled", 00:17:06.696 "thread": "nvmf_tgt_poll_group_000", 00:17:06.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.696 "listen_address": { 00:17:06.696 "trtype": "TCP", 00:17:06.696 "adrfam": "IPv4", 00:17:06.696 "traddr": "10.0.0.2", 00:17:06.696 "trsvcid": "4420" 00:17:06.696 }, 00:17:06.696 "peer_address": { 00:17:06.696 "trtype": "TCP", 00:17:06.696 "adrfam": "IPv4", 00:17:06.696 "traddr": "10.0.0.1", 00:17:06.696 "trsvcid": "42908" 00:17:06.696 }, 00:17:06.696 "auth": { 00:17:06.696 "state": "completed", 00:17:06.696 "digest": "sha256", 00:17:06.696 "dhgroup": "ffdhe6144" 00:17:06.696 } 00:17:06.696 } 00:17:06.696 ]' 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.696 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.955 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.955 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.955 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.955 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:06.955 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.523 15:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.523 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.781 15:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.040 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.298 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.298 { 00:17:08.298 "cntlid": 37, 00:17:08.298 "qid": 0, 00:17:08.298 "state": "enabled", 00:17:08.298 "thread": "nvmf_tgt_poll_group_000", 00:17:08.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.298 "listen_address": { 00:17:08.298 "trtype": "TCP", 00:17:08.298 "adrfam": "IPv4", 00:17:08.298 "traddr": "10.0.0.2", 00:17:08.299 "trsvcid": "4420" 00:17:08.299 }, 00:17:08.299 "peer_address": { 00:17:08.299 "trtype": "TCP", 00:17:08.299 "adrfam": "IPv4", 00:17:08.299 "traddr": "10.0.0.1", 00:17:08.299 "trsvcid": "42938" 00:17:08.299 }, 00:17:08.299 "auth": { 00:17:08.299 "state": "completed", 00:17:08.299 "digest": "sha256", 00:17:08.299 "dhgroup": "ffdhe6144" 00:17:08.299 } 00:17:08.299 } 00:17:08.299 ]' 00:17:08.299 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.558 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.816 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:08.816 15:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.383 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.951 00:17:09.951 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.951 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.951 15:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.951 { 00:17:09.951 "cntlid": 39, 00:17:09.951 "qid": 0, 00:17:09.951 "state": "enabled", 00:17:09.951 "thread": "nvmf_tgt_poll_group_000", 00:17:09.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.951 "listen_address": { 00:17:09.951 "trtype": 
"TCP", 00:17:09.951 "adrfam": "IPv4", 00:17:09.951 "traddr": "10.0.0.2", 00:17:09.951 "trsvcid": "4420" 00:17:09.951 }, 00:17:09.951 "peer_address": { 00:17:09.951 "trtype": "TCP", 00:17:09.951 "adrfam": "IPv4", 00:17:09.951 "traddr": "10.0.0.1", 00:17:09.951 "trsvcid": "42958" 00:17:09.951 }, 00:17:09.951 "auth": { 00:17:09.951 "state": "completed", 00:17:09.951 "digest": "sha256", 00:17:09.951 "dhgroup": "ffdhe6144" 00:17:09.951 } 00:17:09.951 } 00:17:09.951 ]' 00:17:09.951 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.210 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.469 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:10.469 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:11.037 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.037 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.037 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.037 15:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.037 15:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.037 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.606 00:17:11.606 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.606 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.606 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.865 { 00:17:11.865 "cntlid": 41, 00:17:11.865 "qid": 0, 00:17:11.865 "state": "enabled", 00:17:11.865 "thread": "nvmf_tgt_poll_group_000", 00:17:11.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.865 "listen_address": { 00:17:11.865 "trtype": "TCP", 00:17:11.865 "adrfam": "IPv4", 00:17:11.865 "traddr": "10.0.0.2", 00:17:11.865 "trsvcid": "4420" 00:17:11.865 }, 00:17:11.865 "peer_address": { 00:17:11.865 "trtype": "TCP", 00:17:11.865 "adrfam": "IPv4", 00:17:11.865 "traddr": "10.0.0.1", 00:17:11.865 "trsvcid": "42994" 00:17:11.865 }, 00:17:11.865 "auth": { 00:17:11.865 "state": "completed", 00:17:11.865 "digest": "sha256", 00:17:11.865 "dhgroup": "ffdhe8192" 00:17:11.865 } 00:17:11.865 } 00:17:11.865 ]' 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.865 15:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.865 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.865 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.865 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.865 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.125 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:12.125 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.692 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.950 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.951 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.951 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.951 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.951 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.951 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.518 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.518 { 00:17:13.518 "cntlid": 43, 00:17:13.518 "qid": 0, 00:17:13.518 "state": "enabled", 00:17:13.518 "thread": "nvmf_tgt_poll_group_000", 00:17:13.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.518 "listen_address": { 00:17:13.518 "trtype": "TCP", 00:17:13.518 "adrfam": "IPv4", 00:17:13.518 "traddr": "10.0.0.2", 00:17:13.518 "trsvcid": "4420" 00:17:13.518 }, 00:17:13.518 "peer_address": { 00:17:13.518 "trtype": "TCP", 00:17:13.518 "adrfam": "IPv4", 00:17:13.518 "traddr": "10.0.0.1", 00:17:13.518 "trsvcid": "43030" 00:17:13.518 }, 00:17:13.518 "auth": { 00:17:13.518 "state": "completed", 00:17:13.518 "digest": "sha256", 00:17:13.518 "dhgroup": "ffdhe8192" 00:17:13.518 } 00:17:13.518 } 00:17:13.518 ]' 00:17:13.518 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.778 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.037 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:14.037 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.605 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.303 00:17:15.303 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.303 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.303 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.562 { 00:17:15.562 "cntlid": 45, 00:17:15.562 "qid": 0, 00:17:15.562 "state": "enabled", 00:17:15.562 "thread": "nvmf_tgt_poll_group_000", 00:17:15.562 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.562 "listen_address": { 00:17:15.562 "trtype": "TCP", 00:17:15.562 "adrfam": "IPv4", 00:17:15.562 "traddr": "10.0.0.2", 00:17:15.562 "trsvcid": "4420" 00:17:15.562 }, 00:17:15.562 "peer_address": { 00:17:15.562 "trtype": "TCP", 00:17:15.562 "adrfam": "IPv4", 00:17:15.562 "traddr": "10.0.0.1", 00:17:15.562 "trsvcid": "48792" 00:17:15.562 }, 00:17:15.562 "auth": { 00:17:15.562 "state": "completed", 00:17:15.562 "digest": "sha256", 00:17:15.562 "dhgroup": "ffdhe8192" 00:17:15.562 } 00:17:15.562 } 00:17:15.562 ]' 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.562 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.821 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:15.821 15:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.393 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.394 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.960 00:17:16.960 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:16.960 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.960 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.219 { 00:17:17.219 "cntlid": 47, 00:17:17.219 "qid": 0, 00:17:17.219 "state": "enabled", 00:17:17.219 "thread": "nvmf_tgt_poll_group_000", 00:17:17.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.219 "listen_address": { 00:17:17.219 "trtype": "TCP", 00:17:17.219 "adrfam": "IPv4", 00:17:17.219 "traddr": "10.0.0.2", 00:17:17.219 "trsvcid": "4420" 00:17:17.219 }, 00:17:17.219 "peer_address": { 00:17:17.219 "trtype": "TCP", 00:17:17.219 "adrfam": "IPv4", 00:17:17.219 "traddr": "10.0.0.1", 00:17:17.219 "trsvcid": "48826" 00:17:17.219 }, 00:17:17.219 "auth": { 00:17:17.219 "state": "completed", 00:17:17.219 "digest": "sha256", 00:17:17.219 "dhgroup": "ffdhe8192" 00:17:17.219 } 00:17:17.219 } 00:17:17.219 ]' 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.219 15:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.219 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.478 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:17.478 15:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.045 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.303 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.561 00:17:18.561 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.561 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.561 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.819 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.819 15:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.819 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.819 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.819 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.819 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.819 { 00:17:18.819 "cntlid": 49, 00:17:18.819 "qid": 0, 00:17:18.819 "state": "enabled", 00:17:18.819 "thread": "nvmf_tgt_poll_group_000", 00:17:18.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.819 "listen_address": { 00:17:18.819 "trtype": "TCP", 00:17:18.819 "adrfam": "IPv4", 00:17:18.819 "traddr": "10.0.0.2", 00:17:18.819 "trsvcid": "4420" 00:17:18.819 }, 00:17:18.819 "peer_address": { 00:17:18.819 "trtype": "TCP", 00:17:18.819 "adrfam": "IPv4", 00:17:18.819 "traddr": "10.0.0.1", 00:17:18.819 "trsvcid": "48854" 00:17:18.819 }, 00:17:18.819 "auth": { 00:17:18.819 "state": "completed", 00:17:18.820 "digest": "sha384", 00:17:18.820 "dhgroup": "null" 00:17:18.820 } 00:17:18.820 } 00:17:18.820 ]' 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.820 15:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.077 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:19.077 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.644 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.902 15:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.159 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.159 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.159 { 00:17:20.159 "cntlid": 51, 
00:17:20.159 "qid": 0, 00:17:20.159 "state": "enabled", 00:17:20.159 "thread": "nvmf_tgt_poll_group_000", 00:17:20.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.159 "listen_address": { 00:17:20.159 "trtype": "TCP", 00:17:20.159 "adrfam": "IPv4", 00:17:20.159 "traddr": "10.0.0.2", 00:17:20.159 "trsvcid": "4420" 00:17:20.159 }, 00:17:20.159 "peer_address": { 00:17:20.159 "trtype": "TCP", 00:17:20.159 "adrfam": "IPv4", 00:17:20.159 "traddr": "10.0.0.1", 00:17:20.159 "trsvcid": "48876" 00:17:20.159 }, 00:17:20.159 "auth": { 00:17:20.159 "state": "completed", 00:17:20.159 "digest": "sha384", 00:17:20.159 "dhgroup": "null" 00:17:20.159 } 00:17:20.159 } 00:17:20.159 ]' 00:17:20.160 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.416 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.674 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret 
DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:20.674 15:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.241 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.499 00:17:21.499 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.499 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.499 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.757 { 00:17:21.757 "cntlid": 53, 00:17:21.757 "qid": 0, 00:17:21.757 "state": "enabled", 00:17:21.757 "thread": "nvmf_tgt_poll_group_000", 00:17:21.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.757 "listen_address": { 00:17:21.757 "trtype": "TCP", 00:17:21.757 "adrfam": "IPv4", 00:17:21.757 "traddr": "10.0.0.2", 00:17:21.757 "trsvcid": "4420" 00:17:21.757 }, 00:17:21.757 "peer_address": { 00:17:21.757 "trtype": "TCP", 00:17:21.757 "adrfam": "IPv4", 00:17:21.757 "traddr": "10.0.0.1", 00:17:21.757 "trsvcid": "48902" 00:17:21.757 }, 00:17:21.757 "auth": { 00:17:21.757 "state": "completed", 00:17:21.757 "digest": "sha384", 00:17:21.757 "dhgroup": "null" 00:17:21.757 } 00:17:21.757 } 
00:17:21.757 ]' 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.757 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.015 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.015 15:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.015 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.015 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.015 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.272 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:22.272 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.837 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.837 15:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.095 00:17:23.095 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.095 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.095 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.354 { 00:17:23.354 "cntlid": 55, 00:17:23.354 "qid": 0, 00:17:23.354 "state": "enabled", 00:17:23.354 "thread": "nvmf_tgt_poll_group_000", 00:17:23.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.354 "listen_address": { 00:17:23.354 "trtype": "TCP", 00:17:23.354 "adrfam": "IPv4", 00:17:23.354 "traddr": "10.0.0.2", 00:17:23.354 "trsvcid": "4420" 00:17:23.354 }, 00:17:23.354 "peer_address": { 00:17:23.354 "trtype": "TCP", 00:17:23.354 "adrfam": "IPv4", 00:17:23.354 "traddr": "10.0.0.1", 00:17:23.354 "trsvcid": "48938" 00:17:23.354 }, 00:17:23.354 "auth": { 00:17:23.354 "state": "completed", 00:17:23.354 "digest": "sha384", 00:17:23.354 "dhgroup": "null" 00:17:23.354 } 00:17:23.354 } 00:17:23.354 ]' 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.354 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.613 15:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:23.613 15:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:24.194 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.194 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.195 15:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.195 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.452 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:24.452 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.452 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.452 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.452 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.453 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.711 00:17:24.711 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.711 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.711 15:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.969 { 00:17:24.969 "cntlid": 57, 00:17:24.969 "qid": 0, 00:17:24.969 "state": "enabled", 00:17:24.969 "thread": "nvmf_tgt_poll_group_000", 00:17:24.969 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.969 "listen_address": { 00:17:24.969 "trtype": "TCP", 00:17:24.969 "adrfam": "IPv4", 00:17:24.969 "traddr": "10.0.0.2", 00:17:24.969 "trsvcid": "4420" 00:17:24.969 }, 00:17:24.969 "peer_address": { 00:17:24.969 "trtype": "TCP", 00:17:24.969 "adrfam": "IPv4", 00:17:24.969 "traddr": "10.0.0.1", 00:17:24.969 "trsvcid": "36030" 00:17:24.969 }, 00:17:24.969 "auth": { 00:17:24.969 "state": "completed", 00:17:24.969 "digest": "sha384", 00:17:24.969 "dhgroup": "ffdhe2048" 00:17:24.969 } 00:17:24.969 } 00:17:24.969 ]' 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.969 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.227 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:25.227 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.793 15:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.052 15:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.052 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.311 00:17:26.311 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.311 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.311 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.570 { 00:17:26.570 "cntlid": 59, 00:17:26.570 "qid": 0, 00:17:26.570 "state": "enabled", 00:17:26.570 "thread": "nvmf_tgt_poll_group_000", 00:17:26.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.570 "listen_address": { 00:17:26.570 "trtype": "TCP", 00:17:26.570 "adrfam": "IPv4", 00:17:26.570 "traddr": "10.0.0.2", 00:17:26.570 "trsvcid": "4420" 00:17:26.570 }, 00:17:26.570 "peer_address": { 00:17:26.570 "trtype": "TCP", 00:17:26.570 "adrfam": "IPv4", 00:17:26.570 "traddr": "10.0.0.1", 00:17:26.570 "trsvcid": "36056" 00:17:26.570 }, 00:17:26.570 "auth": { 00:17:26.570 "state": 
"completed", 00:17:26.570 "digest": "sha384", 00:17:26.570 "dhgroup": "ffdhe2048" 00:17:26.570 } 00:17:26.570 } 00:17:26.570 ]' 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.570 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.828 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:26.828 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:27.393 15:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.393 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.651 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.910 00:17:27.910 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.910 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.910 15:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.168 
15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.168 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.168 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.168 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.168 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.168 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.168 { 00:17:28.168 "cntlid": 61, 00:17:28.169 "qid": 0, 00:17:28.169 "state": "enabled", 00:17:28.169 "thread": "nvmf_tgt_poll_group_000", 00:17:28.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.169 "listen_address": { 00:17:28.169 "trtype": "TCP", 00:17:28.169 "adrfam": "IPv4", 00:17:28.169 "traddr": "10.0.0.2", 00:17:28.169 "trsvcid": "4420" 00:17:28.169 }, 00:17:28.169 "peer_address": { 00:17:28.169 "trtype": "TCP", 00:17:28.169 "adrfam": "IPv4", 00:17:28.169 "traddr": "10.0.0.1", 00:17:28.169 "trsvcid": "36070" 00:17:28.169 }, 00:17:28.169 "auth": { 00:17:28.169 "state": "completed", 00:17:28.169 "digest": "sha384", 00:17:28.169 "dhgroup": "ffdhe2048" 00:17:28.169 } 00:17:28.169 } 00:17:28.169 ]' 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.169 15:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.169 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.428 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:28.428 15:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.996 
15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.996 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.255 15:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.255 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.513 00:17:29.513 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.513 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.513 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.771 { 00:17:29.771 "cntlid": 63, 00:17:29.771 
"qid": 0, 00:17:29.771 "state": "enabled", 00:17:29.771 "thread": "nvmf_tgt_poll_group_000", 00:17:29.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.771 "listen_address": { 00:17:29.771 "trtype": "TCP", 00:17:29.771 "adrfam": "IPv4", 00:17:29.771 "traddr": "10.0.0.2", 00:17:29.771 "trsvcid": "4420" 00:17:29.771 }, 00:17:29.771 "peer_address": { 00:17:29.771 "trtype": "TCP", 00:17:29.771 "adrfam": "IPv4", 00:17:29.771 "traddr": "10.0.0.1", 00:17:29.771 "trsvcid": "36094" 00:17:29.771 }, 00:17:29.771 "auth": { 00:17:29.771 "state": "completed", 00:17:29.771 "digest": "sha384", 00:17:29.771 "dhgroup": "ffdhe2048" 00:17:29.771 } 00:17:29.771 } 00:17:29.771 ]' 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.771 15:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.030 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:30.030 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:30.596 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.596 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.596 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.596 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.597 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.597 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.597 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.597 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.597 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.855 15:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.855 15:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.113 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.113 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.372 { 00:17:31.372 "cntlid": 65, 00:17:31.372 "qid": 0, 00:17:31.372 "state": "enabled", 00:17:31.372 "thread": "nvmf_tgt_poll_group_000", 00:17:31.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.372 "listen_address": { 00:17:31.372 "trtype": "TCP", 00:17:31.372 "adrfam": "IPv4", 00:17:31.372 "traddr": "10.0.0.2", 00:17:31.372 "trsvcid": "4420" 00:17:31.372 }, 00:17:31.372 "peer_address": { 00:17:31.372 "trtype": "TCP", 00:17:31.372 "adrfam": "IPv4", 00:17:31.372 "traddr": "10.0.0.1", 00:17:31.372 "trsvcid": "36122" 00:17:31.372 }, 00:17:31.372 "auth": { 00:17:31.372 "state": 
"completed", 00:17:31.372 "digest": "sha384", 00:17:31.372 "dhgroup": "ffdhe3072" 00:17:31.372 } 00:17:31.372 } 00:17:31.372 ]' 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.372 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.630 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:31.630 15:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.197 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.455 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.713 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.713 { 00:17:32.713 "cntlid": 67, 00:17:32.713 "qid": 0, 00:17:32.713 "state": "enabled", 00:17:32.713 "thread": "nvmf_tgt_poll_group_000", 00:17:32.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.713 "listen_address": { 00:17:32.713 "trtype": "TCP", 00:17:32.713 "adrfam": "IPv4", 00:17:32.713 "traddr": "10.0.0.2", 00:17:32.713 "trsvcid": "4420" 00:17:32.713 }, 00:17:32.713 "peer_address": { 00:17:32.713 "trtype": "TCP", 00:17:32.713 "adrfam": "IPv4", 00:17:32.713 "traddr": "10.0.0.1", 00:17:32.713 "trsvcid": "36134" 00:17:32.713 }, 00:17:32.713 "auth": { 00:17:32.713 "state": "completed", 00:17:32.713 "digest": "sha384", 00:17:32.713 "dhgroup": "ffdhe3072" 00:17:32.713 } 00:17:32.713 } 00:17:32.713 ]' 00:17:32.713 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.972 15:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.972 15:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.256 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:33.256 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.823 15:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.082 00:17:34.082 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.082 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.082 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.340 15:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.340 { 00:17:34.340 "cntlid": 69, 00:17:34.340 "qid": 0, 00:17:34.340 "state": "enabled", 00:17:34.340 "thread": "nvmf_tgt_poll_group_000", 00:17:34.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:34.340 "listen_address": { 00:17:34.340 "trtype": "TCP", 00:17:34.340 "adrfam": "IPv4", 00:17:34.340 "traddr": "10.0.0.2", 00:17:34.340 "trsvcid": "4420" 00:17:34.340 }, 00:17:34.340 "peer_address": { 00:17:34.340 "trtype": "TCP", 00:17:34.340 "adrfam": "IPv4", 00:17:34.340 "traddr": "10.0.0.1", 00:17:34.340 "trsvcid": "36160" 00:17:34.340 }, 00:17:34.340 "auth": { 00:17:34.340 "state": "completed", 00:17:34.340 "digest": "sha384", 00:17:34.340 "dhgroup": "ffdhe3072" 00:17:34.340 } 00:17:34.340 } 00:17:34.340 ]' 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.340 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.341 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.599 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:34.599 15:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.166 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.424 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.425 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.425 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.682 00:17:35.682 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.682 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.682 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.941 { 00:17:35.941 "cntlid": 71, 00:17:35.941 "qid": 0, 00:17:35.941 "state": "enabled", 00:17:35.941 "thread": "nvmf_tgt_poll_group_000", 00:17:35.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:35.941 "listen_address": { 00:17:35.941 "trtype": "TCP", 00:17:35.941 "adrfam": "IPv4", 00:17:35.941 "traddr": "10.0.0.2", 00:17:35.941 "trsvcid": "4420" 00:17:35.941 }, 00:17:35.941 "peer_address": { 00:17:35.941 "trtype": "TCP", 00:17:35.941 "adrfam": "IPv4", 00:17:35.941 "traddr": "10.0.0.1", 
00:17:35.941 "trsvcid": "38344" 00:17:35.941 }, 00:17:35.941 "auth": { 00:17:35.941 "state": "completed", 00:17:35.941 "digest": "sha384", 00:17:35.941 "dhgroup": "ffdhe3072" 00:17:35.941 } 00:17:35.941 } 00:17:35.941 ]' 00:17:35.941 15:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.941 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.200 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:36.200 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.767 15:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.026 15:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.026 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.284 00:17:37.284 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.284 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.284 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.543 { 00:17:37.543 "cntlid": 73, 00:17:37.543 "qid": 0, 00:17:37.543 "state": "enabled", 00:17:37.543 "thread": "nvmf_tgt_poll_group_000", 00:17:37.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.543 "listen_address": { 00:17:37.543 "trtype": "TCP", 00:17:37.543 "adrfam": "IPv4", 00:17:37.543 "traddr": "10.0.0.2", 00:17:37.543 "trsvcid": "4420" 00:17:37.543 }, 00:17:37.543 "peer_address": { 00:17:37.543 "trtype": "TCP", 00:17:37.543 "adrfam": "IPv4", 00:17:37.543 "traddr": "10.0.0.1", 00:17:37.543 "trsvcid": "38374" 00:17:37.543 }, 00:17:37.543 "auth": { 00:17:37.543 "state": "completed", 00:17:37.543 "digest": "sha384", 00:17:37.543 "dhgroup": "ffdhe4096" 00:17:37.543 } 00:17:37.543 } 00:17:37.543 ]' 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.543 15:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.543 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.802 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:37.802 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.369 15:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.369 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.628 15:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.628 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.887 00:17:38.887 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.887 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.887 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.145 { 00:17:39.145 "cntlid": 75, 00:17:39.145 "qid": 0, 00:17:39.145 "state": "enabled", 00:17:39.145 "thread": "nvmf_tgt_poll_group_000", 00:17:39.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.145 "listen_address": { 00:17:39.145 "trtype": "TCP", 00:17:39.145 "adrfam": "IPv4", 00:17:39.145 "traddr": "10.0.0.2", 00:17:39.145 "trsvcid": "4420" 00:17:39.145 }, 00:17:39.145 "peer_address": { 00:17:39.145 "trtype": "TCP", 00:17:39.145 "adrfam": "IPv4", 00:17:39.145 "traddr": "10.0.0.1", 00:17:39.145 "trsvcid": "38398" 00:17:39.145 }, 00:17:39.145 "auth": { 00:17:39.145 "state": "completed", 00:17:39.145 "digest": "sha384", 00:17:39.145 "dhgroup": "ffdhe4096" 00:17:39.145 } 00:17:39.145 } 00:17:39.145 ]' 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.145 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.404 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:39.404 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.970 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.970 15:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.229 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.488 00:17:40.488 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.488 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.488 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.746 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.746 { 00:17:40.746 "cntlid": 77, 00:17:40.746 "qid": 0, 00:17:40.746 "state": "enabled", 00:17:40.746 "thread": "nvmf_tgt_poll_group_000", 00:17:40.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:40.747 "listen_address": { 00:17:40.747 "trtype": "TCP", 00:17:40.747 "adrfam": "IPv4", 00:17:40.747 "traddr": "10.0.0.2", 00:17:40.747 
"trsvcid": "4420" 00:17:40.747 }, 00:17:40.747 "peer_address": { 00:17:40.747 "trtype": "TCP", 00:17:40.747 "adrfam": "IPv4", 00:17:40.747 "traddr": "10.0.0.1", 00:17:40.747 "trsvcid": "38438" 00:17:40.747 }, 00:17:40.747 "auth": { 00:17:40.747 "state": "completed", 00:17:40.747 "digest": "sha384", 00:17:40.747 "dhgroup": "ffdhe4096" 00:17:40.747 } 00:17:40.747 } 00:17:40.747 ]' 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.747 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.005 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:41.005 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.572 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.830 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.089 00:17:42.089 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.089 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.089 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.348 { 00:17:42.348 "cntlid": 79, 00:17:42.348 "qid": 0, 00:17:42.348 "state": "enabled", 00:17:42.348 "thread": "nvmf_tgt_poll_group_000", 00:17:42.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.348 "listen_address": { 00:17:42.348 "trtype": "TCP", 00:17:42.348 "adrfam": "IPv4", 00:17:42.348 "traddr": "10.0.0.2", 00:17:42.348 "trsvcid": "4420" 00:17:42.348 }, 00:17:42.348 "peer_address": { 00:17:42.348 "trtype": "TCP", 00:17:42.348 "adrfam": "IPv4", 00:17:42.348 "traddr": "10.0.0.1", 00:17:42.348 "trsvcid": "38472" 00:17:42.348 }, 00:17:42.348 "auth": { 00:17:42.348 "state": "completed", 00:17:42.348 "digest": "sha384", 00:17:42.348 "dhgroup": "ffdhe4096" 00:17:42.348 } 00:17:42.348 } 00:17:42.348 ]' 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.348 15:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.348 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.607 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:42.608 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.175 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.434 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.435 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.693 00:17:43.693 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.693 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.693 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.952 15:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.952 { 00:17:43.952 "cntlid": 81, 00:17:43.952 "qid": 0, 00:17:43.952 "state": "enabled", 00:17:43.952 "thread": "nvmf_tgt_poll_group_000", 00:17:43.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.952 "listen_address": { 00:17:43.952 "trtype": "TCP", 00:17:43.952 "adrfam": "IPv4", 00:17:43.952 "traddr": "10.0.0.2", 00:17:43.952 "trsvcid": "4420" 00:17:43.952 }, 00:17:43.952 "peer_address": { 00:17:43.952 "trtype": "TCP", 00:17:43.952 "adrfam": "IPv4", 00:17:43.952 "traddr": "10.0.0.1", 00:17:43.952 "trsvcid": "38500" 00:17:43.952 }, 00:17:43.952 "auth": { 00:17:43.952 "state": "completed", 00:17:43.952 "digest": "sha384", 00:17:43.952 "dhgroup": "ffdhe6144" 00:17:43.952 } 00:17:43.952 } 00:17:43.952 ]' 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.952 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.952 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.952 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.952 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.952 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.952 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.211 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:44.211 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.779 15:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.779 15:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.037 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.296 00:17:45.296 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.296 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.296 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.555 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.555 { 00:17:45.555 "cntlid": 83, 00:17:45.555 "qid": 0, 00:17:45.555 "state": "enabled", 00:17:45.555 "thread": "nvmf_tgt_poll_group_000", 00:17:45.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:45.555 "listen_address": { 00:17:45.555 "trtype": "TCP", 00:17:45.555 "adrfam": "IPv4", 00:17:45.555 "traddr": "10.0.0.2", 00:17:45.555 
"trsvcid": "4420" 00:17:45.555 }, 00:17:45.555 "peer_address": { 00:17:45.555 "trtype": "TCP", 00:17:45.555 "adrfam": "IPv4", 00:17:45.556 "traddr": "10.0.0.1", 00:17:45.556 "trsvcid": "51084" 00:17:45.556 }, 00:17:45.556 "auth": { 00:17:45.556 "state": "completed", 00:17:45.556 "digest": "sha384", 00:17:45.556 "dhgroup": "ffdhe6144" 00:17:45.556 } 00:17:45.556 } 00:17:45.556 ]' 00:17:45.556 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.556 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.556 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.556 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.556 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.815 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.815 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.815 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.815 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:45.815 15:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.383 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.642 15:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.901 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.160 { 00:17:47.160 "cntlid": 85, 00:17:47.160 "qid": 0, 00:17:47.160 "state": "enabled", 00:17:47.160 "thread": "nvmf_tgt_poll_group_000", 00:17:47.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:47.160 "listen_address": { 00:17:47.160 "trtype": "TCP", 00:17:47.160 "adrfam": "IPv4", 00:17:47.160 "traddr": "10.0.0.2", 00:17:47.160 "trsvcid": "4420" 00:17:47.160 }, 00:17:47.160 "peer_address": { 00:17:47.160 "trtype": "TCP", 00:17:47.160 "adrfam": "IPv4", 00:17:47.160 "traddr": "10.0.0.1", 00:17:47.160 "trsvcid": "51114" 00:17:47.160 }, 00:17:47.160 "auth": { 00:17:47.160 "state": "completed", 00:17:47.160 "digest": "sha384", 00:17:47.160 "dhgroup": "ffdhe6144" 00:17:47.160 } 00:17:47.160 } 00:17:47.160 ]' 00:17:47.160 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.418 15:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.418 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.677 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:47.677 15:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.244 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.811 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.811 { 00:17:48.811 "cntlid": 87, 00:17:48.811 "qid": 0, 00:17:48.811 "state": "enabled", 00:17:48.811 "thread": "nvmf_tgt_poll_group_000", 00:17:48.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.811 "listen_address": { 00:17:48.811 "trtype": "TCP", 00:17:48.811 "adrfam": "IPv4", 00:17:48.811 "traddr": "10.0.0.2", 00:17:48.811 "trsvcid": "4420" 00:17:48.811 }, 00:17:48.811 "peer_address": { 00:17:48.811 "trtype": "TCP", 00:17:48.811 "adrfam": "IPv4", 00:17:48.811 "traddr": "10.0.0.1", 00:17:48.811 "trsvcid": "51142" 00:17:48.811 }, 00:17:48.811 "auth": { 00:17:48.811 "state": "completed", 00:17:48.811 "digest": "sha384", 00:17:48.811 "dhgroup": "ffdhe6144" 00:17:48.811 } 00:17:48.811 } 00:17:48.811 ]' 00:17:48.811 15:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.068 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.326 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:49.326 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:49.891 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.892 15:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.892 15:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.892 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.459 00:17:50.459 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.459 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.459 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.717 { 00:17:50.717 "cntlid": 89, 00:17:50.717 "qid": 0, 00:17:50.717 "state": "enabled", 00:17:50.717 "thread": "nvmf_tgt_poll_group_000", 00:17:50.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.717 "listen_address": { 00:17:50.717 "trtype": "TCP", 00:17:50.717 "adrfam": "IPv4", 00:17:50.717 "traddr": "10.0.0.2", 00:17:50.717 
"trsvcid": "4420" 00:17:50.717 }, 00:17:50.717 "peer_address": { 00:17:50.717 "trtype": "TCP", 00:17:50.717 "adrfam": "IPv4", 00:17:50.717 "traddr": "10.0.0.1", 00:17:50.717 "trsvcid": "51174" 00:17:50.717 }, 00:17:50.717 "auth": { 00:17:50.717 "state": "completed", 00:17:50.717 "digest": "sha384", 00:17:50.717 "dhgroup": "ffdhe8192" 00:17:50.717 } 00:17:50.717 } 00:17:50.717 ]' 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.717 15:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.975 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:50.975 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.540 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.799 15:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.799 15:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.366 00:17:52.366 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.366 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.366 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.624 { 00:17:52.624 "cntlid": 91, 00:17:52.624 "qid": 0, 00:17:52.624 "state": "enabled", 00:17:52.624 "thread": "nvmf_tgt_poll_group_000", 00:17:52.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.624 "listen_address": { 00:17:52.624 "trtype": "TCP", 00:17:52.624 "adrfam": "IPv4", 00:17:52.624 "traddr": "10.0.0.2", 00:17:52.624 "trsvcid": "4420" 00:17:52.624 }, 00:17:52.624 "peer_address": { 00:17:52.624 "trtype": "TCP", 00:17:52.624 "adrfam": "IPv4", 00:17:52.624 "traddr": "10.0.0.1", 00:17:52.624 "trsvcid": "51206" 00:17:52.624 }, 00:17:52.624 "auth": { 00:17:52.624 "state": "completed", 00:17:52.624 "digest": "sha384", 00:17:52.624 "dhgroup": "ffdhe8192" 00:17:52.624 } 00:17:52.624 } 00:17:52.624 ]' 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.624 15:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.624 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.947 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:52.947 15:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.578 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.144 00:17:54.144 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.144 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.144 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.402 15:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.402 { 00:17:54.402 "cntlid": 93, 00:17:54.402 "qid": 0, 00:17:54.402 "state": "enabled", 00:17:54.402 "thread": "nvmf_tgt_poll_group_000", 00:17:54.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.402 "listen_address": { 00:17:54.402 "trtype": "TCP", 00:17:54.402 "adrfam": "IPv4", 00:17:54.402 "traddr": "10.0.0.2", 00:17:54.402 "trsvcid": "4420" 00:17:54.402 }, 00:17:54.402 "peer_address": { 00:17:54.402 "trtype": "TCP", 00:17:54.402 "adrfam": "IPv4", 00:17:54.402 "traddr": "10.0.0.1", 00:17:54.402 "trsvcid": "51234" 00:17:54.402 }, 00:17:54.402 "auth": { 00:17:54.402 "state": "completed", 00:17:54.402 "digest": "sha384", 00:17:54.402 "dhgroup": "ffdhe8192" 00:17:54.402 } 00:17:54.402 } 00:17:54.402 ]' 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.402 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.660 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:54.660 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:17:55.225 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.226 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.483 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.049 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.049 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.049 { 00:17:56.049 "cntlid": 95, 00:17:56.049 "qid": 0, 00:17:56.049 "state": "enabled", 00:17:56.049 "thread": "nvmf_tgt_poll_group_000", 00:17:56.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.049 "listen_address": { 00:17:56.049 "trtype": "TCP", 00:17:56.049 "adrfam": 
"IPv4", 00:17:56.049 "traddr": "10.0.0.2", 00:17:56.049 "trsvcid": "4420" 00:17:56.049 }, 00:17:56.049 "peer_address": { 00:17:56.049 "trtype": "TCP", 00:17:56.049 "adrfam": "IPv4", 00:17:56.049 "traddr": "10.0.0.1", 00:17:56.049 "trsvcid": "32864" 00:17:56.049 }, 00:17:56.049 "auth": { 00:17:56.049 "state": "completed", 00:17:56.049 "digest": "sha384", 00:17:56.049 "dhgroup": "ffdhe8192" 00:17:56.049 } 00:17:56.049 } 00:17:56.049 ]' 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.307 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.566 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:56.566 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.132 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.391 
15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.391 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.391 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.650 { 00:17:57.650 "cntlid": 97, 00:17:57.650 "qid": 0, 00:17:57.650 "state": "enabled", 00:17:57.650 "thread": "nvmf_tgt_poll_group_000", 00:17:57.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.650 "listen_address": { 00:17:57.650 "trtype": "TCP", 00:17:57.650 "adrfam": "IPv4", 00:17:57.650 "traddr": "10.0.0.2", 00:17:57.650 "trsvcid": "4420" 00:17:57.650 }, 00:17:57.650 "peer_address": { 00:17:57.650 "trtype": "TCP", 00:17:57.650 "adrfam": "IPv4", 00:17:57.650 "traddr": "10.0.0.1", 00:17:57.650 "trsvcid": "32888" 00:17:57.650 }, 00:17:57.650 "auth": { 00:17:57.650 "state": "completed", 00:17:57.650 "digest": "sha512", 00:17:57.650 "dhgroup": "null" 00:17:57.650 } 00:17:57.650 } 00:17:57.650 ]' 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.650 15:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.650 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.908 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.908 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.908 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.908 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.908 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.167 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:58.167 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.735 15:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.735 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.995 00:17:58.995 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.995 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.995 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.253 { 00:17:59.253 "cntlid": 99, 00:17:59.253 "qid": 0, 00:17:59.253 "state": "enabled", 00:17:59.253 "thread": "nvmf_tgt_poll_group_000", 00:17:59.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.253 "listen_address": { 00:17:59.253 "trtype": "TCP", 00:17:59.253 "adrfam": "IPv4", 00:17:59.253 "traddr": "10.0.0.2", 00:17:59.253 "trsvcid": "4420" 00:17:59.253 }, 00:17:59.253 "peer_address": { 00:17:59.253 "trtype": "TCP", 00:17:59.253 "adrfam": "IPv4", 00:17:59.253 "traddr": "10.0.0.1", 00:17:59.253 "trsvcid": "32914" 00:17:59.253 }, 00:17:59.253 "auth": { 00:17:59.253 "state": "completed", 00:17:59.253 "digest": "sha512", 00:17:59.253 "dhgroup": "null" 00:17:59.253 } 00:17:59.253 } 00:17:59.253 ]' 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.253 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.513 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.513 
15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.513 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.513 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:17:59.513 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.079 
15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.079 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.338 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.596 00:18:00.596 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.596 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.596 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.853 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.853 { 00:18:00.853 "cntlid": 101, 00:18:00.853 "qid": 0, 00:18:00.853 "state": "enabled", 00:18:00.853 "thread": "nvmf_tgt_poll_group_000", 00:18:00.853 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:00.854 "listen_address": { 00:18:00.854 "trtype": "TCP", 00:18:00.854 "adrfam": "IPv4", 00:18:00.854 "traddr": "10.0.0.2", 00:18:00.854 "trsvcid": "4420" 00:18:00.854 }, 00:18:00.854 "peer_address": { 00:18:00.854 "trtype": "TCP", 00:18:00.854 "adrfam": "IPv4", 00:18:00.854 "traddr": "10.0.0.1", 00:18:00.854 "trsvcid": "32942" 00:18:00.854 }, 00:18:00.854 "auth": { 00:18:00.854 "state": "completed", 00:18:00.854 "digest": "sha512", 00:18:00.854 "dhgroup": "null" 00:18:00.854 } 00:18:00.854 } 00:18:00.854 ]' 00:18:00.854 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.854 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.854 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.854 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.854 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.854 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.854 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.854 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.112 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:01.112 15:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.678 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.934 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:01.934 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:01.934 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.934 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:01.935 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.935 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.935 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:01.935 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.935 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.935 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.935 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.935 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.935 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.192 00:18:02.192 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.192 
15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.192 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.450 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.450 { 00:18:02.450 "cntlid": 103, 00:18:02.450 "qid": 0, 00:18:02.450 "state": "enabled", 00:18:02.450 "thread": "nvmf_tgt_poll_group_000", 00:18:02.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.450 "listen_address": { 00:18:02.450 "trtype": "TCP", 00:18:02.450 "adrfam": "IPv4", 00:18:02.451 "traddr": "10.0.0.2", 00:18:02.451 "trsvcid": "4420" 00:18:02.451 }, 00:18:02.451 "peer_address": { 00:18:02.451 "trtype": "TCP", 00:18:02.451 "adrfam": "IPv4", 00:18:02.451 "traddr": "10.0.0.1", 00:18:02.451 "trsvcid": "32962" 00:18:02.451 }, 00:18:02.451 "auth": { 00:18:02.451 "state": "completed", 00:18:02.451 "digest": "sha512", 00:18:02.451 "dhgroup": "null" 00:18:02.451 } 00:18:02.451 } 00:18:02.451 ]' 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.451 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.709 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:02.709 15:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.276 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.535 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.793 00:18:03.793 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.793 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.793 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.793 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.051 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.051 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:04.051 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.051 15:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.051 { 00:18:04.051 "cntlid": 105, 00:18:04.051 "qid": 0, 00:18:04.051 "state": "enabled", 00:18:04.051 "thread": "nvmf_tgt_poll_group_000", 00:18:04.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.051 "listen_address": { 00:18:04.051 "trtype": "TCP", 00:18:04.051 "adrfam": "IPv4", 00:18:04.051 "traddr": "10.0.0.2", 00:18:04.051 "trsvcid": "4420" 00:18:04.051 }, 00:18:04.051 "peer_address": { 00:18:04.051 "trtype": "TCP", 00:18:04.051 "adrfam": "IPv4", 00:18:04.051 "traddr": "10.0.0.1", 00:18:04.051 "trsvcid": "32980" 00:18:04.051 }, 00:18:04.051 "auth": { 00:18:04.051 "state": "completed", 00:18:04.051 "digest": "sha512", 00:18:04.051 "dhgroup": "ffdhe2048" 00:18:04.051 } 00:18:04.051 } 00:18:04.051 ]' 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.051 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.051 15:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.309 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:04.309 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.875 15:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.133 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.391 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.391 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.392 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.392 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.392 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.392 { 00:18:05.392 "cntlid": 107, 00:18:05.392 "qid": 0, 00:18:05.392 "state": "enabled", 00:18:05.392 "thread": "nvmf_tgt_poll_group_000", 00:18:05.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.392 
"listen_address": { 00:18:05.392 "trtype": "TCP", 00:18:05.392 "adrfam": "IPv4", 00:18:05.392 "traddr": "10.0.0.2", 00:18:05.392 "trsvcid": "4420" 00:18:05.392 }, 00:18:05.392 "peer_address": { 00:18:05.392 "trtype": "TCP", 00:18:05.392 "adrfam": "IPv4", 00:18:05.392 "traddr": "10.0.0.1", 00:18:05.392 "trsvcid": "48502" 00:18:05.392 }, 00:18:05.392 "auth": { 00:18:05.392 "state": "completed", 00:18:05.392 "digest": "sha512", 00:18:05.392 "dhgroup": "ffdhe2048" 00:18:05.392 } 00:18:05.392 } 00:18:05.392 ]' 00:18:05.392 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.650 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.908 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:05.908 15:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:06.473 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.474 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.732 00:18:06.732 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:06.732 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.732 15:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.990 { 00:18:06.990 "cntlid": 109, 00:18:06.990 "qid": 0, 00:18:06.990 "state": "enabled", 00:18:06.990 "thread": "nvmf_tgt_poll_group_000", 00:18:06.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.990 "listen_address": { 00:18:06.990 "trtype": "TCP", 00:18:06.990 "adrfam": "IPv4", 00:18:06.990 "traddr": "10.0.0.2", 00:18:06.990 "trsvcid": "4420" 00:18:06.990 }, 00:18:06.990 "peer_address": { 00:18:06.990 "trtype": "TCP", 00:18:06.990 "adrfam": "IPv4", 00:18:06.990 "traddr": "10.0.0.1", 00:18:06.990 "trsvcid": "48520" 00:18:06.990 }, 00:18:06.990 "auth": { 00:18:06.990 "state": "completed", 00:18:06.990 "digest": "sha512", 00:18:06.990 "dhgroup": "ffdhe2048" 00:18:06.990 } 00:18:06.990 } 00:18:06.990 ]' 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.990 15:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.990 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.248 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:07.249 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:07.816 15:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.075 15:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.075 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.333 00:18:08.333 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.333 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.333 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.592 15:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.592 { 00:18:08.592 "cntlid": 111, 00:18:08.592 "qid": 0, 00:18:08.592 "state": "enabled", 00:18:08.592 "thread": "nvmf_tgt_poll_group_000", 00:18:08.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.592 "listen_address": { 00:18:08.592 "trtype": "TCP", 00:18:08.592 "adrfam": "IPv4", 00:18:08.592 "traddr": "10.0.0.2", 00:18:08.592 "trsvcid": "4420" 00:18:08.592 }, 00:18:08.592 "peer_address": { 00:18:08.592 "trtype": "TCP", 00:18:08.592 "adrfam": "IPv4", 00:18:08.592 "traddr": "10.0.0.1", 00:18:08.592 "trsvcid": "48546" 00:18:08.592 }, 00:18:08.592 "auth": { 00:18:08.592 "state": "completed", 00:18:08.592 "digest": "sha512", 00:18:08.592 "dhgroup": "ffdhe2048" 00:18:08.592 } 00:18:08.592 } 00:18:08.592 ]' 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.592 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.852 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.852 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.852 15:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.852 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:08.852 15:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:18:09.418 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.677 15:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.936 00:18:09.936 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.936 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.936 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.195 { 00:18:10.195 "cntlid": 113, 00:18:10.195 "qid": 0, 00:18:10.195 "state": "enabled", 00:18:10.195 "thread": "nvmf_tgt_poll_group_000", 00:18:10.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.195 "listen_address": { 
00:18:10.195 "trtype": "TCP", 00:18:10.195 "adrfam": "IPv4", 00:18:10.195 "traddr": "10.0.0.2", 00:18:10.195 "trsvcid": "4420" 00:18:10.195 }, 00:18:10.195 "peer_address": { 00:18:10.195 "trtype": "TCP", 00:18:10.195 "adrfam": "IPv4", 00:18:10.195 "traddr": "10.0.0.1", 00:18:10.195 "trsvcid": "48570" 00:18:10.195 }, 00:18:10.195 "auth": { 00:18:10.195 "state": "completed", 00:18:10.195 "digest": "sha512", 00:18:10.195 "dhgroup": "ffdhe3072" 00:18:10.195 } 00:18:10.195 } 00:18:10.195 ]' 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.195 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.454 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.454 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.454 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.454 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:10.454 15:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.021 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.281 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.539 00:18:11.539 15:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.539 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.539 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.798 { 00:18:11.798 "cntlid": 115, 00:18:11.798 "qid": 0, 00:18:11.798 "state": "enabled", 00:18:11.798 "thread": "nvmf_tgt_poll_group_000", 00:18:11.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.798 "listen_address": { 00:18:11.798 "trtype": "TCP", 00:18:11.798 "adrfam": "IPv4", 00:18:11.798 "traddr": "10.0.0.2", 00:18:11.798 "trsvcid": "4420" 00:18:11.798 }, 00:18:11.798 "peer_address": { 00:18:11.798 "trtype": "TCP", 00:18:11.798 "adrfam": "IPv4", 00:18:11.798 "traddr": "10.0.0.1", 00:18:11.798 "trsvcid": "48598" 00:18:11.798 }, 00:18:11.798 "auth": { 00:18:11.798 "state": "completed", 00:18:11.798 "digest": "sha512", 00:18:11.798 "dhgroup": "ffdhe3072" 00:18:11.798 } 00:18:11.798 } 00:18:11.798 ]' 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.798 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.055 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:12.055 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.622 15:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.622 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.881 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.140 00:18:13.140 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.140 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.140 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.399 { 00:18:13.399 "cntlid": 117, 00:18:13.399 "qid": 0, 00:18:13.399 "state": "enabled", 00:18:13.399 "thread": "nvmf_tgt_poll_group_000", 00:18:13.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:13.399 "listen_address": { 00:18:13.399 "trtype": "TCP", 00:18:13.399 "adrfam": "IPv4", 00:18:13.399 "traddr": "10.0.0.2", 00:18:13.399 "trsvcid": "4420" 00:18:13.399 }, 00:18:13.399 "peer_address": { 00:18:13.399 "trtype": "TCP", 00:18:13.399 "adrfam": "IPv4", 00:18:13.399 "traddr": "10.0.0.1", 00:18:13.399 "trsvcid": "48628" 00:18:13.399 }, 00:18:13.399 "auth": { 00:18:13.399 "state": "completed", 00:18:13.399 "digest": "sha512", 00:18:13.399 "dhgroup": "ffdhe3072" 00:18:13.399 } 00:18:13.399 } 00:18:13.399 ]' 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.399 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.658 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:13.658 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:14.226 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.486 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.745 00:18:14.745 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.745 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.745 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.004 { 00:18:15.004 "cntlid": 119, 00:18:15.004 "qid": 0, 00:18:15.004 "state": "enabled", 00:18:15.004 "thread": "nvmf_tgt_poll_group_000", 00:18:15.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:15.004 "listen_address": { 00:18:15.004 
"trtype": "TCP", 00:18:15.004 "adrfam": "IPv4", 00:18:15.004 "traddr": "10.0.0.2", 00:18:15.004 "trsvcid": "4420" 00:18:15.004 }, 00:18:15.004 "peer_address": { 00:18:15.004 "trtype": "TCP", 00:18:15.004 "adrfam": "IPv4", 00:18:15.004 "traddr": "10.0.0.1", 00:18:15.004 "trsvcid": "55384" 00:18:15.004 }, 00:18:15.004 "auth": { 00:18:15.004 "state": "completed", 00:18:15.004 "digest": "sha512", 00:18:15.004 "dhgroup": "ffdhe3072" 00:18:15.004 } 00:18:15.004 } 00:18:15.004 ]' 00:18:15.004 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.004 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.263 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:15.263 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.830 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.089 15:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.089 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.348 00:18:16.348 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.348 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.348 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.607 { 00:18:16.607 "cntlid": 121, 00:18:16.607 "qid": 0, 00:18:16.607 "state": "enabled", 00:18:16.607 "thread": "nvmf_tgt_poll_group_000", 00:18:16.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.607 "listen_address": { 00:18:16.607 "trtype": "TCP", 00:18:16.607 "adrfam": "IPv4", 00:18:16.607 "traddr": "10.0.0.2", 00:18:16.607 "trsvcid": "4420" 00:18:16.607 }, 00:18:16.607 "peer_address": { 00:18:16.607 "trtype": "TCP", 00:18:16.607 "adrfam": "IPv4", 00:18:16.607 "traddr": "10.0.0.1", 00:18:16.607 "trsvcid": "55412" 00:18:16.607 }, 00:18:16.607 "auth": { 00:18:16.607 "state": "completed", 00:18:16.607 "digest": "sha512", 00:18:16.607 "dhgroup": "ffdhe4096" 00:18:16.607 } 00:18:16.607 } 00:18:16.607 ]' 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.607 15:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.607 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.866 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:16.866 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.433 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.692 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.951 00:18:17.951 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.951 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.951 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.210 { 00:18:18.210 "cntlid": 123, 00:18:18.210 "qid": 0, 00:18:18.210 "state": "enabled", 00:18:18.210 "thread": "nvmf_tgt_poll_group_000", 00:18:18.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.210 "listen_address": { 00:18:18.210 "trtype": "TCP", 00:18:18.210 "adrfam": "IPv4", 00:18:18.210 "traddr": "10.0.0.2", 00:18:18.210 "trsvcid": "4420" 00:18:18.210 }, 00:18:18.210 "peer_address": { 00:18:18.210 "trtype": "TCP", 00:18:18.210 "adrfam": "IPv4", 00:18:18.210 "traddr": "10.0.0.1", 00:18:18.210 "trsvcid": "55440" 00:18:18.210 }, 00:18:18.210 "auth": { 00:18:18.210 "state": "completed", 00:18:18.210 "digest": "sha512", 00:18:18.210 "dhgroup": "ffdhe4096" 00:18:18.210 } 00:18:18.210 } 00:18:18.210 ]' 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.210 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.469 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:18.469 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.038 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.297 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.556 00:18:19.557 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.557 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.557 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.816 { 00:18:19.816 "cntlid": 125, 00:18:19.816 "qid": 0, 00:18:19.816 "state": "enabled", 00:18:19.816 "thread": "nvmf_tgt_poll_group_000", 00:18:19.816 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:19.816 "listen_address": { 00:18:19.816 "trtype": "TCP", 00:18:19.816 "adrfam": "IPv4", 00:18:19.816 "traddr": "10.0.0.2", 00:18:19.816 "trsvcid": "4420" 00:18:19.816 }, 00:18:19.816 "peer_address": { 00:18:19.816 "trtype": "TCP", 00:18:19.816 "adrfam": "IPv4", 00:18:19.816 "traddr": "10.0.0.1", 00:18:19.816 "trsvcid": "55464" 00:18:19.816 }, 00:18:19.816 "auth": { 00:18:19.816 "state": "completed", 00:18:19.816 "digest": "sha512", 00:18:19.816 "dhgroup": "ffdhe4096" 00:18:19.816 } 00:18:19.816 } 00:18:19.816 ]' 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.816 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.075 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:20.075 15:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.641 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.900 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.159 00:18:21.159 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:21.159 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.159 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.417 { 00:18:21.417 "cntlid": 127, 00:18:21.417 "qid": 0, 00:18:21.417 "state": "enabled", 00:18:21.417 "thread": "nvmf_tgt_poll_group_000", 00:18:21.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.417 "listen_address": { 00:18:21.417 "trtype": "TCP", 00:18:21.417 "adrfam": "IPv4", 00:18:21.417 "traddr": "10.0.0.2", 00:18:21.417 "trsvcid": "4420" 00:18:21.417 }, 00:18:21.417 "peer_address": { 00:18:21.417 "trtype": "TCP", 00:18:21.417 "adrfam": "IPv4", 00:18:21.417 "traddr": "10.0.0.1", 00:18:21.417 "trsvcid": "55490" 00:18:21.417 }, 00:18:21.417 "auth": { 00:18:21.417 "state": "completed", 00:18:21.417 "digest": "sha512", 00:18:21.417 "dhgroup": "ffdhe4096" 00:18:21.417 } 00:18:21.417 } 00:18:21.417 ]' 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.417 15:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.417 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.676 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:21.676 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.244 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.503 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.762 00:18:22.762 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.762 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.762 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.020 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.020 { 00:18:23.020 "cntlid": 129, 00:18:23.020 "qid": 0, 00:18:23.021 "state": "enabled", 00:18:23.021 "thread": "nvmf_tgt_poll_group_000", 00:18:23.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.021 "listen_address": { 00:18:23.021 "trtype": "TCP", 00:18:23.021 "adrfam": "IPv4", 00:18:23.021 "traddr": "10.0.0.2", 00:18:23.021 "trsvcid": "4420" 00:18:23.021 }, 00:18:23.021 "peer_address": { 00:18:23.021 "trtype": "TCP", 00:18:23.021 "adrfam": "IPv4", 00:18:23.021 "traddr": "10.0.0.1", 00:18:23.021 "trsvcid": "55520" 00:18:23.021 }, 00:18:23.021 "auth": { 00:18:23.021 "state": "completed", 00:18:23.021 "digest": "sha512", 00:18:23.021 "dhgroup": "ffdhe6144" 00:18:23.021 } 00:18:23.021 } 00:18:23.021 ]' 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.021 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.279 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:23.279 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.847 15:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.847 15:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:24.105 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.106 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.365 00:18:24.365 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.365 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.365 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.623 { 00:18:24.623 "cntlid": 131, 00:18:24.623 "qid": 0, 00:18:24.623 "state": 
"enabled", 00:18:24.623 "thread": "nvmf_tgt_poll_group_000", 00:18:24.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:24.623 "listen_address": { 00:18:24.623 "trtype": "TCP", 00:18:24.623 "adrfam": "IPv4", 00:18:24.623 "traddr": "10.0.0.2", 00:18:24.623 "trsvcid": "4420" 00:18:24.623 }, 00:18:24.623 "peer_address": { 00:18:24.623 "trtype": "TCP", 00:18:24.623 "adrfam": "IPv4", 00:18:24.623 "traddr": "10.0.0.1", 00:18:24.623 "trsvcid": "40572" 00:18:24.623 }, 00:18:24.623 "auth": { 00:18:24.623 "state": "completed", 00:18:24.623 "digest": "sha512", 00:18:24.623 "dhgroup": "ffdhe6144" 00:18:24.623 } 00:18:24.623 } 00:18:24.623 ]' 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.623 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.624 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.624 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.624 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.883 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.883 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.883 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.883 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret 
DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:24.883 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.449 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.708 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.966 00:18:25.966 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.966 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.966 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.224 { 00:18:26.224 "cntlid": 133, 00:18:26.224 "qid": 0, 00:18:26.224 "state": "enabled", 00:18:26.224 "thread": "nvmf_tgt_poll_group_000", 00:18:26.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.224 "listen_address": { 00:18:26.224 "trtype": "TCP", 00:18:26.224 "adrfam": "IPv4", 00:18:26.224 "traddr": "10.0.0.2", 00:18:26.224 "trsvcid": "4420" 00:18:26.224 }, 00:18:26.224 "peer_address": { 00:18:26.224 "trtype": "TCP", 00:18:26.224 "adrfam": "IPv4", 00:18:26.224 "traddr": "10.0.0.1", 00:18:26.224 "trsvcid": "40586" 00:18:26.224 }, 00:18:26.224 "auth": { 00:18:26.224 "state": "completed", 00:18:26.224 "digest": "sha512", 00:18:26.224 "dhgroup": "ffdhe6144" 00:18:26.224 } 
00:18:26.224 } 00:18:26.224 ]' 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.224 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:26.482 15:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:27.050 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:27.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.309 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.877 00:18:27.877 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.877 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.877 15:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.877 { 00:18:27.877 "cntlid": 135, 00:18:27.877 "qid": 0, 00:18:27.877 "state": "enabled", 00:18:27.877 "thread": "nvmf_tgt_poll_group_000", 00:18:27.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:27.877 "listen_address": { 00:18:27.877 "trtype": "TCP", 00:18:27.877 "adrfam": "IPv4", 00:18:27.877 "traddr": "10.0.0.2", 00:18:27.877 "trsvcid": "4420" 00:18:27.877 }, 00:18:27.877 "peer_address": { 00:18:27.877 "trtype": "TCP", 00:18:27.877 "adrfam": "IPv4", 00:18:27.877 "traddr": "10.0.0.1", 00:18:27.877 "trsvcid": "40632" 00:18:27.877 }, 00:18:27.877 "auth": { 00:18:27.877 "state": "completed", 00:18:27.877 "digest": "sha512", 00:18:27.877 "dhgroup": "ffdhe6144" 00:18:27.877 } 00:18:27.877 } 00:18:27.877 ]' 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.877 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.135 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.136 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.136 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.136 15:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.136 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.393 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:28.394 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.960 15:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.960 15:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.960 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.525 00:18:29.525 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.525 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.525 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.786 { 00:18:29.786 "cntlid": 137, 00:18:29.786 "qid": 0, 00:18:29.786 "state": "enabled", 00:18:29.786 "thread": "nvmf_tgt_poll_group_000", 00:18:29.786 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.786 "listen_address": { 00:18:29.786 "trtype": "TCP", 00:18:29.786 "adrfam": "IPv4", 00:18:29.786 "traddr": "10.0.0.2", 00:18:29.786 "trsvcid": "4420" 00:18:29.786 }, 00:18:29.786 "peer_address": { 00:18:29.786 "trtype": "TCP", 00:18:29.786 "adrfam": "IPv4", 00:18:29.786 "traddr": "10.0.0.1", 00:18:29.786 "trsvcid": "40658" 00:18:29.786 }, 00:18:29.786 "auth": { 00:18:29.786 "state": "completed", 00:18:29.786 "digest": "sha512", 00:18:29.786 "dhgroup": "ffdhe8192" 00:18:29.786 } 00:18:29.786 } 00:18:29.786 ]' 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.786 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.787 15:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.071 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:30.071 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.693 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.951 15:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.951 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.517 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.518 { 00:18:31.518 "cntlid": 139, 00:18:31.518 "qid": 0, 00:18:31.518 "state": "enabled", 00:18:31.518 "thread": "nvmf_tgt_poll_group_000", 00:18:31.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.518 "listen_address": { 00:18:31.518 "trtype": "TCP", 00:18:31.518 "adrfam": "IPv4", 00:18:31.518 "traddr": "10.0.0.2", 00:18:31.518 "trsvcid": "4420" 00:18:31.518 }, 00:18:31.518 "peer_address": { 00:18:31.518 "trtype": "TCP", 00:18:31.518 "adrfam": "IPv4", 00:18:31.518 "traddr": "10.0.0.1", 00:18:31.518 "trsvcid": "40698" 00:18:31.518 }, 00:18:31.518 "auth": { 00:18:31.518 "state": 
"completed", 00:18:31.518 "digest": "sha512", 00:18:31.518 "dhgroup": "ffdhe8192" 00:18:31.518 } 00:18:31.518 } 00:18:31.518 ]' 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.518 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:31.777 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: --dhchap-ctrl-secret DHHC-1:02:OWI2OTBlZDFiZGRiNzJkM2MxOWY4YWE4MTBlZmFhZDkxMTQ2NDkyMGFkZWIwMTNmH6C/jw==: 00:18:32.345 15:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.345 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.603 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:32.603 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.603 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.603 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.603 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.604 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.171 00:18:33.171 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.171 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.171 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.430 
15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.430 { 00:18:33.430 "cntlid": 141, 00:18:33.430 "qid": 0, 00:18:33.430 "state": "enabled", 00:18:33.430 "thread": "nvmf_tgt_poll_group_000", 00:18:33.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.430 "listen_address": { 00:18:33.430 "trtype": "TCP", 00:18:33.430 "adrfam": "IPv4", 00:18:33.430 "traddr": "10.0.0.2", 00:18:33.430 "trsvcid": "4420" 00:18:33.430 }, 00:18:33.430 "peer_address": { 00:18:33.430 "trtype": "TCP", 00:18:33.430 "adrfam": "IPv4", 00:18:33.430 "traddr": "10.0.0.1", 00:18:33.430 "trsvcid": "40726" 00:18:33.430 }, 00:18:33.430 "auth": { 00:18:33.430 "state": "completed", 00:18:33.430 "digest": "sha512", 00:18:33.430 "dhgroup": "ffdhe8192" 00:18:33.430 } 00:18:33.430 } 00:18:33.430 ]' 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.430 15:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.430 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.689 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:33.689 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:01:ZjhkYzNlMTE3MmJiZjZjYjZjZDAyMTFhY2QzYzg4ZjSJtmTv: 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.254 
15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.254 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.512 15:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.512 15:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.078 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.078 { 00:18:35.078 "cntlid": 143, 
00:18:35.078 "qid": 0, 00:18:35.078 "state": "enabled", 00:18:35.078 "thread": "nvmf_tgt_poll_group_000", 00:18:35.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:35.078 "listen_address": { 00:18:35.078 "trtype": "TCP", 00:18:35.078 "adrfam": "IPv4", 00:18:35.078 "traddr": "10.0.0.2", 00:18:35.078 "trsvcid": "4420" 00:18:35.078 }, 00:18:35.078 "peer_address": { 00:18:35.078 "trtype": "TCP", 00:18:35.078 "adrfam": "IPv4", 00:18:35.078 "traddr": "10.0.0.1", 00:18:35.078 "trsvcid": "41486" 00:18:35.078 }, 00:18:35.078 "auth": { 00:18:35.078 "state": "completed", 00:18:35.078 "digest": "sha512", 00:18:35.078 "dhgroup": "ffdhe8192" 00:18:35.078 } 00:18:35.078 } 00:18:35.078 ]' 00:18:35.078 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.336 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.595 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:35.595 15:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.160 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.724 00:18:36.724 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.724 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.724 15:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.982 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.982 { 00:18:36.982 "cntlid": 145, 00:18:36.982 "qid": 0, 00:18:36.982 "state": "enabled", 00:18:36.982 "thread": "nvmf_tgt_poll_group_000", 00:18:36.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:36.982 "listen_address": { 
00:18:36.982 "trtype": "TCP", 00:18:36.982 "adrfam": "IPv4", 00:18:36.982 "traddr": "10.0.0.2", 00:18:36.982 "trsvcid": "4420" 00:18:36.982 }, 00:18:36.982 "peer_address": { 00:18:36.982 "trtype": "TCP", 00:18:36.982 "adrfam": "IPv4", 00:18:36.982 "traddr": "10.0.0.1", 00:18:36.982 "trsvcid": "41514" 00:18:36.982 }, 00:18:36.982 "auth": { 00:18:36.982 "state": "completed", 00:18:36.982 "digest": "sha512", 00:18:36.982 "dhgroup": "ffdhe8192" 00:18:36.983 } 00:18:36.983 } 00:18:36.983 ]' 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.983 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.240 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:37.240 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODI3NmM2N2Y2ZGNkNTQ3NGNjZTU3N2ZhODUxOGRkODY3YjBhNjkwMmRlNDIzMDAxWrfeRQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ2MTBhNDc0OWIwYmFiNGRjOThlYjgzNjNjMWFhZmRmYzg2MzFjODE5YzQ0ZTY1NjJjMDUzMWNkNzZkMjA3Oa18HCk=: 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:37.807 15:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:38.373 request: 00:18:38.373 { 00:18:38.373 "name": "nvme0", 00:18:38.373 "trtype": "tcp", 00:18:38.373 "traddr": "10.0.0.2", 00:18:38.373 "adrfam": "ipv4", 00:18:38.373 "trsvcid": "4420", 00:18:38.373 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:38.373 "prchk_reftag": false, 00:18:38.373 "prchk_guard": false, 00:18:38.373 "hdgst": false, 00:18:38.373 "ddgst": 
false, 00:18:38.373 "dhchap_key": "key2", 00:18:38.373 "allow_unrecognized_csi": false, 00:18:38.373 "method": "bdev_nvme_attach_controller", 00:18:38.373 "req_id": 1 00:18:38.373 } 00:18:38.373 Got JSON-RPC error response 00:18:38.373 response: 00:18:38.373 { 00:18:38.373 "code": -5, 00:18:38.373 "message": "Input/output error" 00:18:38.373 } 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.373 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.374 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.941 request: 00:18:38.941 { 00:18:38.941 "name": "nvme0", 00:18:38.941 "trtype": "tcp", 00:18:38.941 "traddr": "10.0.0.2", 
00:18:38.941 "adrfam": "ipv4", 00:18:38.941 "trsvcid": "4420", 00:18:38.941 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:38.941 "prchk_reftag": false, 00:18:38.941 "prchk_guard": false, 00:18:38.941 "hdgst": false, 00:18:38.941 "ddgst": false, 00:18:38.941 "dhchap_key": "key1", 00:18:38.941 "dhchap_ctrlr_key": "ckey2", 00:18:38.941 "allow_unrecognized_csi": false, 00:18:38.941 "method": "bdev_nvme_attach_controller", 00:18:38.941 "req_id": 1 00:18:38.941 } 00:18:38.941 Got JSON-RPC error response 00:18:38.941 response: 00:18:38.941 { 00:18:38.941 "code": -5, 00:18:38.941 "message": "Input/output error" 00:18:38.941 } 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.941 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.942 15:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.200 request: 00:18:39.200 { 00:18:39.200 "name": "nvme0", 00:18:39.200 "trtype": "tcp", 00:18:39.200 "traddr": "10.0.0.2", 00:18:39.200 "adrfam": "ipv4", 00:18:39.200 "trsvcid": "4420", 00:18:39.200 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:39.200 "prchk_reftag": false, 00:18:39.200 "prchk_guard": false, 00:18:39.200 "hdgst": false, 00:18:39.200 "ddgst": false, 00:18:39.200 "dhchap_key": "key1", 00:18:39.200 "dhchap_ctrlr_key": "ckey1", 00:18:39.200 "allow_unrecognized_csi": false, 00:18:39.200 "method": "bdev_nvme_attach_controller", 00:18:39.200 "req_id": 1 00:18:39.200 } 00:18:39.200 Got JSON-RPC error response 00:18:39.200 response: 00:18:39.200 { 00:18:39.200 "code": -5, 00:18:39.200 "message": "Input/output error" 00:18:39.200 } 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.200 
15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2416800 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2416800 ']' 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2416800 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2416800 00:18:39.200 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2416800' 00:18:39.459 killing process with pid 2416800 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2416800 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2416800 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2438801 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2438801 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2438801 ']' 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.459 15:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2438801 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2438801 ']' 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.397 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.656 null0 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pHb 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.LtL ]] 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LtL 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.656 15:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ZRl 00:18:40.656 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1pq ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1pq 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Okc 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.nWL ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWL 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wmR 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.916 15:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.484 nvme0n1 00:18:41.484 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.484 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.484 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.742 { 00:18:41.742 "cntlid": 1, 00:18:41.742 "qid": 0, 00:18:41.742 "state": "enabled", 00:18:41.742 "thread": "nvmf_tgt_poll_group_000", 00:18:41.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.742 "listen_address": { 00:18:41.742 "trtype": "TCP", 00:18:41.742 "adrfam": "IPv4", 00:18:41.742 "traddr": "10.0.0.2", 00:18:41.742 "trsvcid": "4420" 00:18:41.742 }, 00:18:41.742 "peer_address": { 00:18:41.742 "trtype": "TCP", 00:18:41.742 "adrfam": "IPv4", 00:18:41.742 "traddr": "10.0.0.1", 00:18:41.742 "trsvcid": "41578" 00:18:41.742 }, 00:18:41.742 "auth": { 00:18:41.742 "state": "completed", 00:18:41.742 "digest": "sha512", 00:18:41.742 "dhgroup": "ffdhe8192" 00:18:41.742 } 00:18:41.742 } 00:18:41.742 ]' 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.742 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.001 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:18:42.001 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.001 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.001 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.001 15:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.001 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:42.001 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.569 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.828 15:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.087 request: 00:18:43.087 { 00:18:43.087 "name": "nvme0", 00:18:43.087 "trtype": "tcp", 00:18:43.087 "traddr": "10.0.0.2", 00:18:43.087 "adrfam": "ipv4", 00:18:43.087 "trsvcid": "4420", 00:18:43.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.087 "prchk_reftag": false, 00:18:43.087 "prchk_guard": false, 00:18:43.087 "hdgst": false, 00:18:43.087 "ddgst": false, 00:18:43.087 "dhchap_key": "key3", 00:18:43.087 "allow_unrecognized_csi": false, 00:18:43.087 "method": "bdev_nvme_attach_controller", 00:18:43.087 "req_id": 1 00:18:43.087 } 00:18:43.087 Got JSON-RPC error response 00:18:43.087 response: 00:18:43.087 { 00:18:43.087 "code": -5, 00:18:43.087 "message": "Input/output error" 00:18:43.087 } 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.087 15:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:43.087 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:43.346 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:43.346 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:43.347 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.605 request: 00:18:43.606 { 00:18:43.606 "name": "nvme0", 00:18:43.606 "trtype": "tcp", 00:18:43.606 "traddr": "10.0.0.2", 00:18:43.606 "adrfam": "ipv4", 00:18:43.606 "trsvcid": "4420", 00:18:43.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.606 "prchk_reftag": false, 00:18:43.606 "prchk_guard": false, 00:18:43.606 "hdgst": false, 00:18:43.606 "ddgst": false, 00:18:43.606 "dhchap_key": "key3", 00:18:43.606 "allow_unrecognized_csi": false, 00:18:43.606 "method": "bdev_nvme_attach_controller", 00:18:43.606 "req_id": 1 00:18:43.606 } 00:18:43.606 Got JSON-RPC error response 00:18:43.606 response: 00:18:43.606 { 00:18:43.606 "code": -5, 00:18:43.606 "message": "Input/output error" 00:18:43.606 } 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.606 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.864 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.123 request: 00:18:44.123 { 00:18:44.123 "name": "nvme0", 00:18:44.123 "trtype": "tcp", 00:18:44.123 "traddr": "10.0.0.2", 00:18:44.123 "adrfam": "ipv4", 00:18:44.123 "trsvcid": "4420", 00:18:44.123 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.123 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:44.123 "prchk_reftag": false, 00:18:44.123 "prchk_guard": false, 00:18:44.123 "hdgst": false, 00:18:44.123 "ddgst": false, 00:18:44.123 "dhchap_key": "key0", 00:18:44.123 "dhchap_ctrlr_key": "key1", 00:18:44.123 "allow_unrecognized_csi": false, 00:18:44.123 "method": "bdev_nvme_attach_controller", 00:18:44.123 "req_id": 1 00:18:44.123 } 00:18:44.123 Got JSON-RPC error response 00:18:44.123 response: 00:18:44.123 { 00:18:44.123 "code": -5, 00:18:44.123 "message": "Input/output error" 00:18:44.123 } 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:44.123 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:44.382 nvme0n1 00:18:44.382 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:18:44.382 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:44.382 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.640 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.640 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:44.641 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:45.576 nvme0n1 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.576 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:45.835 15:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: --dhchap-ctrl-secret DHHC-1:03:NThiNzBiOTEzZDVkZDQ4YTNjNzc4MWY1Y2FkMWNiOWVmNTI5MTM5YTY0ZTRmM2QwN2VlZTkyNWFlZGViN2RmMjqtZ2A=: 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:46.400 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:46.401 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.401 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.658 15:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:46.658 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:47.225 request: 00:18:47.225 { 00:18:47.225 "name": "nvme0", 00:18:47.225 "trtype": "tcp", 00:18:47.225 "traddr": "10.0.0.2", 00:18:47.225 "adrfam": "ipv4", 00:18:47.225 "trsvcid": "4420", 00:18:47.225 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.225 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:47.225 "prchk_reftag": false, 00:18:47.225 "prchk_guard": false, 00:18:47.225 "hdgst": false, 00:18:47.225 "ddgst": false, 00:18:47.225 "dhchap_key": "key1", 00:18:47.225 "allow_unrecognized_csi": false, 00:18:47.225 "method": "bdev_nvme_attach_controller", 00:18:47.225 "req_id": 1 00:18:47.225 } 00:18:47.225 Got JSON-RPC error response 00:18:47.225 response: 00:18:47.225 { 00:18:47.225 "code": -5, 00:18:47.225 "message": "Input/output error" 00:18:47.225 } 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.225 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.791 nvme0n1 00:18:47.791 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:18:47.791 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:47.791 15:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.050 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.050 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.050 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:48.312 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:48.572 nvme0n1 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.572 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: '' 2s 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:48.831 15:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: ]] 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mzg4NzAzMDM3YTkzYjU3YmFhNGIxNmMyM2NmYjNiYjCZVTio: 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:48.831 15:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # return 0 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: 2s 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: ]] 00:18:51.365 15:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:NWIyNzBjMzcxMjUxY2EwNTM2YzBjOWM4NDVmZDNmYjAwMjcyNjg2ZDM0YWRmMmMykbgBgA==: 00:18:51.365 15:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:51.365 15:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.271 15:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.271 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:53.840 nvme0n1 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:53.840 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:54.409 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:54.669 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:54.669 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:54.669 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.928 15:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:54.928 15:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:55.187 request: 00:18:55.187 { 00:18:55.187 "name": "nvme0", 00:18:55.187 "dhchap_key": "key1", 00:18:55.187 "dhchap_ctrlr_key": "key3", 00:18:55.187 "method": "bdev_nvme_set_keys", 00:18:55.187 "req_id": 1 00:18:55.187 } 00:18:55.187 Got JSON-RPC error response 00:18:55.187 response: 00:18:55.187 { 00:18:55.187 "code": -13, 00:18:55.187 "message": "Permission denied" 00:18:55.187 } 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:55.187 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.446 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:55.446 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:56.824 15:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:57.392 nvme0n1 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:57.392 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:57.959 request: 00:18:57.959 { 00:18:57.959 "name": "nvme0", 00:18:57.959 "dhchap_key": "key2", 
00:18:57.959 "dhchap_ctrlr_key": "key0", 00:18:57.959 "method": "bdev_nvme_set_keys", 00:18:57.959 "req_id": 1 00:18:57.959 } 00:18:57.959 Got JSON-RPC error response 00:18:57.959 response: 00:18:57.959 { 00:18:57.959 "code": -13, 00:18:57.959 "message": "Permission denied" 00:18:57.959 } 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:57.959 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.217 15:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:58.217 15:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:59.153 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:59.153 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:59.153 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:59.411 15:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2416934 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2416934 ']' 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2416934 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2416934 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2416934' 00:18:59.411 killing process with pid 2416934 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2416934 00:18:59.411 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2416934 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.669 rmmod nvme_tcp 00:18:59.669 rmmod nvme_fabrics 00:18:59.669 rmmod nvme_keyring 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 2438801 ']' 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 2438801 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2438801 ']' 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2438801 00:18:59.669 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2438801 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2438801' 00:18:59.928 killing process with pid 2438801 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2438801 00:18:59.928 15:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2438801 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:59.928 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:19:00.186 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.186 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.186 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.186 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.186 15:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pHb /tmp/spdk.key-sha256.ZRl 
/tmp/spdk.key-sha384.Okc /tmp/spdk.key-sha512.wmR /tmp/spdk.key-sha512.LtL /tmp/spdk.key-sha384.1pq /tmp/spdk.key-sha256.nWL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:02.115 00:19:02.115 real 2m32.488s 00:19:02.115 user 5m50.675s 00:19:02.115 sys 0m24.080s 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.115 ************************************ 00:19:02.115 END TEST nvmf_auth_target 00:19:02.115 ************************************ 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.115 ************************************ 00:19:02.115 START TEST nvmf_bdevio_no_huge 00:19:02.115 ************************************ 00:19:02.115 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:02.375 * Looking for test storage... 
00:19:02.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:02.375 15:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.375 15:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:02.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.375 --rc genhtml_branch_coverage=1 00:19:02.375 --rc genhtml_function_coverage=1 00:19:02.375 --rc genhtml_legend=1 00:19:02.375 --rc geninfo_all_blocks=1 00:19:02.375 --rc geninfo_unexecuted_blocks=1 00:19:02.375 00:19:02.375 ' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:02.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.375 --rc genhtml_branch_coverage=1 00:19:02.375 --rc genhtml_function_coverage=1 00:19:02.375 --rc genhtml_legend=1 00:19:02.375 --rc geninfo_all_blocks=1 00:19:02.375 --rc geninfo_unexecuted_blocks=1 00:19:02.375 00:19:02.375 ' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:02.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.375 --rc genhtml_branch_coverage=1 00:19:02.375 --rc genhtml_function_coverage=1 00:19:02.375 --rc genhtml_legend=1 00:19:02.375 --rc geninfo_all_blocks=1 00:19:02.375 --rc geninfo_unexecuted_blocks=1 00:19:02.375 00:19:02.375 ' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:02.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.375 --rc genhtml_branch_coverage=1 00:19:02.375 --rc genhtml_function_coverage=1 00:19:02.375 --rc genhtml_legend=1 00:19:02.375 --rc geninfo_all_blocks=1 00:19:02.375 --rc geninfo_unexecuted_blocks=1 00:19:02.375 00:19:02.375 ' 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:02.375 
15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.375 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.376 15:53:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.947 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.948 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.948 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:19:08.948 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:08.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:19:08.948 00:19:08.948 --- 10.0.0.2 ping statistics --- 00:19:08.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.948 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:08.948 00:19:08.948 --- 10.0.0.1 ping statistics --- 00:19:08.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.948 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:08.948 15:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=2445798 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 2445798 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2445798 ']' 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.948 15:53:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.948 [2024-10-01 15:53:18.468857] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:19:08.948 [2024-10-01 15:53:18.468914] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:08.948 [2024-10-01 15:53:18.542381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.948 [2024-10-01 15:53:18.626421] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.948 [2024-10-01 15:53:18.626458] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.948 [2024-10-01 15:53:18.626466] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.948 [2024-10-01 15:53:18.626472] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.948 [2024-10-01 15:53:18.626477] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.948 [2024-10-01 15:53:18.626602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.948 [2024-10-01 15:53:18.626710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.948 [2024-10-01 15:53:18.626819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:19:08.948 [2024-10-01 15:53:18.626824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 [2024-10-01 15:53:19.343722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.207 15:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 Malloc0 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.207 [2024-10-01 15:53:19.388041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.207 15:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:19:09.207 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:19:09.207 { 00:19:09.207 "params": { 00:19:09.207 "name": "Nvme$subsystem", 00:19:09.207 "trtype": "$TEST_TRANSPORT", 00:19:09.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.207 "adrfam": "ipv4", 00:19:09.207 "trsvcid": "$NVMF_PORT", 00:19:09.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.207 "hdgst": ${hdgst:-false}, 00:19:09.207 "ddgst": ${ddgst:-false} 00:19:09.207 }, 00:19:09.207 "method": "bdev_nvme_attach_controller" 00:19:09.207 } 00:19:09.207 EOF 00:19:09.207 )") 00:19:09.465 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:19:09.465 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:19:09.465 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:19:09.465 15:53:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:19:09.465 "params": { 00:19:09.465 "name": "Nvme1", 00:19:09.465 "trtype": "tcp", 00:19:09.465 "traddr": "10.0.0.2", 00:19:09.465 "adrfam": "ipv4", 00:19:09.465 "trsvcid": "4420", 00:19:09.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.465 "hdgst": false, 00:19:09.465 "ddgst": false 00:19:09.465 }, 00:19:09.465 "method": "bdev_nvme_attach_controller" 00:19:09.465 }' 00:19:09.465 [2024-10-01 15:53:19.439697] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:19:09.465 [2024-10-01 15:53:19.439743] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2445942 ] 00:19:09.465 [2024-10-01 15:53:19.510849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.465 [2024-10-01 15:53:19.597622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.465 [2024-10-01 15:53:19.597748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.465 [2024-10-01 15:53:19.597749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.723 I/O targets: 00:19:09.723 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:09.723 00:19:09.723 00:19:09.723 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.723 http://cunit.sourceforge.net/ 00:19:09.723 00:19:09.723 00:19:09.723 Suite: bdevio tests on: Nvme1n1 00:19:09.980 Test: blockdev write read block ...passed 00:19:09.980 Test: blockdev write zeroes read block ...passed 00:19:09.980 Test: blockdev write zeroes read no split ...passed 00:19:09.980 Test: blockdev write zeroes 
read split ...passed 00:19:09.980 Test: blockdev write zeroes read split partial ...passed 00:19:09.980 Test: blockdev reset ...[2024-10-01 15:53:20.011661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.980 [2024-10-01 15:53:20.011730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c039f0 (9): Bad file descriptor 00:19:09.980 [2024-10-01 15:53:20.066474] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:09.980 passed 00:19:09.980 Test: blockdev write read 8 blocks ...passed 00:19:09.980 Test: blockdev write read size > 128k ...passed 00:19:09.980 Test: blockdev write read invalid size ...passed 00:19:09.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:09.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:09.980 Test: blockdev write read max offset ...passed 00:19:10.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:10.238 Test: blockdev writev readv 8 blocks ...passed 00:19:10.238 Test: blockdev writev readv 30 x 1block ...passed 00:19:10.238 Test: blockdev writev readv block ...passed 00:19:10.238 Test: blockdev writev readv size > 128k ...passed 00:19:10.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:10.238 Test: blockdev comparev and writev ...[2024-10-01 15:53:20.319845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.319876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.319890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.319898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.320143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.320166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.320413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.320433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.238 [2024-10-01 15:53:20.320679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.320690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:19:10.238 [2024-10-01 15:53:20.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.238 passed 00:19:10.238 Test: blockdev nvme passthru rw ...passed 00:19:10.238 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:53:20.402245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.238 [2024-10-01 15:53:20.402265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.402371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.238 [2024-10-01 15:53:20.402381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.402481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.238 [2024-10-01 15:53:20.402492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.238 [2024-10-01 15:53:20.402588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.238 [2024-10-01 15:53:20.402598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.238 passed 00:19:10.238 Test: blockdev nvme admin passthru ...passed 00:19:10.538 Test: blockdev copy ...passed 00:19:10.538 00:19:10.538 Run Summary: Type Total Ran Passed Failed Inactive 00:19:10.538 suites 1 1 n/a 0 0 00:19:10.538 tests 23 23 23 0 0 00:19:10.538 asserts 152 152 152 0 n/a 00:19:10.538 00:19:10.538 Elapsed time = 1.140 seconds 00:19:10.796 15:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.796 rmmod nvme_tcp 00:19:10.796 rmmod nvme_fabrics 00:19:10.796 rmmod nvme_keyring 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 2445798 ']' 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@514 -- # killprocess 2445798 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2445798 ']' 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2445798 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2445798 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2445798' 00:19:10.796 killing process with pid 2445798 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2445798 00:19:10.796 15:53:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2445798 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@787 -- # iptables-restore 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.055 15:53:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.587 00:19:13.587 real 0m11.003s 00:19:13.587 user 0m14.238s 00:19:13.587 sys 0m5.382s 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.587 ************************************ 00:19:13.587 END TEST nvmf_bdevio_no_huge 00:19:13.587 ************************************ 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.587 ************************************ 00:19:13.587 START TEST nvmf_tls 
00:19:13.587 ************************************ 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.587 * Looking for test storage... 00:19:13.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.587 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.588 15:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:19:13.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.588 --rc genhtml_branch_coverage=1 00:19:13.588 --rc genhtml_function_coverage=1 00:19:13.588 --rc genhtml_legend=1 00:19:13.588 --rc geninfo_all_blocks=1 00:19:13.588 --rc geninfo_unexecuted_blocks=1 00:19:13.588 00:19:13.588 ' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:13.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.588 --rc genhtml_branch_coverage=1 00:19:13.588 --rc genhtml_function_coverage=1 00:19:13.588 --rc genhtml_legend=1 00:19:13.588 --rc geninfo_all_blocks=1 00:19:13.588 --rc geninfo_unexecuted_blocks=1 00:19:13.588 00:19:13.588 ' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:13.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.588 --rc genhtml_branch_coverage=1 00:19:13.588 --rc genhtml_function_coverage=1 00:19:13.588 --rc genhtml_legend=1 00:19:13.588 --rc geninfo_all_blocks=1 00:19:13.588 --rc geninfo_unexecuted_blocks=1 00:19:13.588 00:19:13.588 ' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:13.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.588 --rc genhtml_branch_coverage=1 00:19:13.588 --rc genhtml_function_coverage=1 00:19:13.588 --rc genhtml_legend=1 00:19:13.588 --rc geninfo_all_blocks=1 00:19:13.588 --rc geninfo_unexecuted_blocks=1 00:19:13.588 00:19:13.588 ' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.588 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:13.589 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:20.232 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:20.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:20.232 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:20.233 Found net devices under 0000:86:00.0: cvl_0_0 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:20.233 Found net devices under 0000:86:00.1: cvl_0_1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:19:20.233 00:19:20.233 --- 10.0.0.2 ping statistics --- 00:19:20.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.233 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:19:20.233 00:19:20.233 --- 10.0.0.1 ping statistics --- 00:19:20.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.233 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2449790 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2449790 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2449790 ']' 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.233 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.233 [2024-10-01 15:53:29.602957] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:19:20.233 [2024-10-01 15:53:29.603004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.233 [2024-10-01 15:53:29.675439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.233 [2024-10-01 15:53:29.753080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.233 [2024-10-01 15:53:29.753116] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:20.233 [2024-10-01 15:53:29.753123] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.233 [2024-10-01 15:53:29.753129] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.233 [2024-10-01 15:53:29.753134] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.233 [2024-10-01 15:53:29.753152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:20.542 true 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.542 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:20.802 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:20.802 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:20.802 
15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:21.061 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.061 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:21.319 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:21.319 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:21.319 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:21.319 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.319 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:21.578 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:21.578 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:21.578 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.578 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:21.837 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:21.837 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:21.837 15:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
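The `sock_impl_set_options` / `sock_impl_get_options` pairs above (tls-version 13, then 7, then the ktls toggles) all follow one set-then-verify pattern: write an option over RPC, read it back with `jq`, and fail the test if the value did not stick (`[[ 13 != \1\3 ]]` and friends). A self-contained sketch of that pattern is below; `rpc_set`/`rpc_get` are hypothetical stand-ins for `scripts/rpc.py sock_impl_set_options`/`sock_impl_get_options -i ssl`, backed by a temp file instead of a running target:

```shell
# Mock option store standing in for the ssl sock impl's state.
state=$(mktemp)
rpc_set() { printf '%s=%s\n' "$1" "$2" >> "$state"; }          # set_options
rpc_get() { grep "^$1=" "$state" | tail -n 1 | cut -d= -f2; }  # get_options | jq

# Set, read back, and assert -- the same shape as tls.sh's
#   version=$(... sock_impl_get_options | jq -r .tls_version)
#   [[ $version != \1\3 ]] && fail
rpc_set tls_version 13
version=$(rpc_get tls_version)
[ "$version" = 13 ] || { echo "tls_version did not stick" >&2; exit 1; }
echo "verified tls_version=$version"
```

The backslash-escaped pattern on the right of `[[ ... != \1\3 ]]` in the trace is bash quoting the literal string so it is not treated as a glob; the comparison is a plain string equality check.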
00:19:21.837 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:21.837 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:22.097 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:22.097 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:22.097 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:22.356 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.356 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:22.615 15:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Iegb3MmDVX 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.o5N4bMdMSa 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Iegb3MmDVX 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.o5N4bMdMSa 00:19:22.615 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:22.873 15:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:23.132 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Iegb3MmDVX 00:19:23.132 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Iegb3MmDVX 00:19:23.132 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:23.132 [2024-10-01 15:53:33.268737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.132 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:23.391 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.650 [2024-10-01 15:53:33.637692] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.650 [2024-10-01 15:53:33.637932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.650 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.650 malloc0 00:19:23.910 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:23.910 15:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Iegb3MmDVX 00:19:24.169 15:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:24.427 15:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Iegb3MmDVX 00:19:34.403 Initializing NVMe Controllers 00:19:34.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:34.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:34.403 Initialization complete. Launching workers. 
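The `format_interchange_psk` calls earlier produced the configured-PSK strings (`NVMeTLSkey-1:01:MDAxMTIy...:`) that are written to the temp key files and handed to `keyring_file_add_key` and `--psk-path`. A sketch of that formatting follows; like `nvmf/common.sh`, it delegates to a python one-liner. The layout is `NVMeTLSkey-1:<hash>:<base64(psk + crc32)>:`; treating the PSK bytes as the literal ASCII hex string and the checksum as zlib's CRC-32 appended little-endian is an assumption inferred from the `MDAxMTIy...` payload in this log, not a statement of the authoritative spec:

```shell
# Sketch of format_interchange_psk/format_key: build the NVMe/TCP TLS
# configured-PSK interchange string from a raw key and a hash identifier
# (01 in this run). Byte conventions here are illustrative assumptions.
format_interchange_psk() {
    python3 - "$1" "$2" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode("ascii")          # PSK taken as the literal ASCII hex string
digest = int(sys.argv[2])                  # hash id -> the ":01:" field
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 appended little-endian
print(f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```

Under these assumptions the base64 payload begins `MDAxMTIyMzM0NDU1...`, matching the key the log stores in `/tmp/tmp.Iegb3MmDVX` up to the trailing CRC bytes.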
00:19:34.403 ======================================================== 00:19:34.403 Latency(us) 00:19:34.403 Device Information : IOPS MiB/s Average min max 00:19:34.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16917.18 66.08 3783.23 880.85 5763.09 00:19:34.403 ======================================================== 00:19:34.403 Total : 16917.18 66.08 3783.23 880.85 5763.09 00:19:34.403 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iegb3MmDVX 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iegb3MmDVX 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2452184 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.403 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2452184 /var/tmp/bdevperf.sock 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2452184 ']' 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.404 15:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.404 [2024-10-01 15:53:44.541923] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:19:34.404 [2024-10-01 15:53:44.541969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452184 ] 00:19:34.663 [2024-10-01 15:53:44.609119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.663 [2024-10-01 15:53:44.679401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.232 15:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.232 15:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:35.232 15:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iegb3MmDVX 00:19:35.491 15:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:35.750 [2024-10-01 15:53:45.715121] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.750 TLSTESTn1 00:19:35.750 15:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:35.750 Running I/O for 10 seconds... 00:19:45.991 5286.00 IOPS, 20.65 MiB/s 5506.00 IOPS, 21.51 MiB/s 5557.00 IOPS, 21.71 MiB/s 5548.00 IOPS, 21.67 MiB/s 5536.20 IOPS, 21.63 MiB/s 5499.50 IOPS, 21.48 MiB/s 5423.71 IOPS, 21.19 MiB/s 5356.50 IOPS, 20.92 MiB/s 5293.33 IOPS, 20.68 MiB/s 5280.40 IOPS, 20.63 MiB/s 00:19:45.991 Latency(us) 00:19:45.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.991 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.991 Verification LBA range: start 0x0 length 0x2000 00:19:45.991 TLSTESTn1 : 10.02 5284.41 20.64 0.00 0.00 24186.71 5960.66 49183.21 00:19:45.991 =================================================================================================================== 00:19:45.992 Total : 5284.41 20.64 0.00 0.00 24186.71 5960.66 49183.21 00:19:45.992 { 00:19:45.992 "results": [ 00:19:45.992 { 00:19:45.992 "job": "TLSTESTn1", 00:19:45.992 "core_mask": "0x4", 00:19:45.992 "workload": "verify", 00:19:45.992 "status": "finished", 00:19:45.992 "verify_range": { 00:19:45.992 "start": 0, 00:19:45.992 "length": 8192 00:19:45.992 }, 00:19:45.992 "queue_depth": 128, 00:19:45.992 "io_size": 4096, 00:19:45.992 "runtime": 10.016445, 00:19:45.992 "iops": 5284.409788103464, 00:19:45.992 "mibps": 20.642225734779156, 00:19:45.992 "io_failed": 0, 00:19:45.992 "io_timeout": 0, 00:19:45.992 "avg_latency_us": 24186.708963457368, 00:19:45.992 "min_latency_us": 5960.655238095238, 00:19:45.992 "max_latency_us": 49183.20761904762 00:19:45.992 } 00:19:45.992 ], 00:19:45.992 "core_count": 1 
00:19:45.992 }
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2452184
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2452184 ']'
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2452184
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:45.992 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2452184
00:19:45.992 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:45.992 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:45.992 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2452184'
killing process with pid 2452184
00:19:45.992 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2452184
00:19:45.992 Received shutdown signal, test time was about 10.000000 seconds
00:19:45.992
00:19:45.992 Latency(us)
00:19:45.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.992 ===================================================================================================================
00:19:45.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:45.992 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2452184
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5N4bMdMSa
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5N4bMdMSa
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:46.251 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o5N4bMdMSa
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o5N4bMdMSa
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2454058
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2454058 /var/tmp/bdevperf.sock
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2454058 ']'
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:46.252 15:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:46.252 [2024-10-01 15:53:56.241292] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:19:46.252 [2024-10-01 15:53:56.241339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454058 ]
00:19:46.252 [2024-10-01 15:53:56.311259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:46.252 [2024-10-01 15:53:56.389718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:47.188 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:47.188 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:47.188 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o5N4bMdMSa
00:19:47.188 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:47.447 [2024-10-01 15:53:57.413560] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:47.447 [2024-10-01 15:53:57.418306] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:47.447 [2024-10-01 15:53:57.418935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ad220 (107): Transport endpoint is not connected
00:19:47.447 [2024-10-01 15:53:57.419927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ad220 (9): Bad file descriptor
00:19:47.447 [2024-10-01 15:53:57.420929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:47.447 [2024-10-01 15:53:57.420941] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:47.447 [2024-10-01 15:53:57.420949] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:19:47.447 [2024-10-01 15:53:57.420960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:47.447 request:
00:19:47.447 {
00:19:47.447 "name": "TLSTEST",
00:19:47.447 "trtype": "tcp",
00:19:47.447 "traddr": "10.0.0.2",
00:19:47.447 "adrfam": "ipv4",
00:19:47.447 "trsvcid": "4420",
00:19:47.447 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:47.447 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:47.447 "prchk_reftag": false,
00:19:47.447 "prchk_guard": false,
00:19:47.447 "hdgst": false,
00:19:47.447 "ddgst": false,
00:19:47.447 "psk": "key0",
00:19:47.447 "allow_unrecognized_csi": false,
00:19:47.447 "method": "bdev_nvme_attach_controller",
00:19:47.447 "req_id": 1
00:19:47.447 }
00:19:47.447 Got JSON-RPC error response
00:19:47.447 response:
00:19:47.447 {
00:19:47.447 "code": -5,
00:19:47.447 "message": "Input/output error"
00:19:47.447 }
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2454058
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2454058 ']'
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2454058
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2454058
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2454058'
killing process with pid 2454058
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2454058
00:19:47.447 Received shutdown signal, test time was about 10.000000 seconds
00:19:47.447
00:19:47.447 Latency(us)
00:19:47.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.447 ===================================================================================================================
00:19:47.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:47.447 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2454058
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iegb3MmDVX
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iegb3MmDVX
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iegb3MmDVX
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iegb3MmDVX
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2454343
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2454343 /var/tmp/bdevperf.sock
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2454343 ']'
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:47.706 15:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:47.706 [2024-10-01 15:53:57.718618] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:19:47.706 [2024-10-01 15:53:57.718667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454343 ]
00:19:47.706 [2024-10-01 15:53:57.787643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:47.706 [2024-10-01 15:53:57.865965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:48.641 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:48.641 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:48.641 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iegb3MmDVX
00:19:48.641 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:19:48.900 [2024-10-01 15:53:58.909323] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:48.900 [2024-10-01 15:53:58.920287] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:48.900 [2024-10-01 15:53:58.920311] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:19:48.900 [2024-10-01 15:53:58.920334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:48.900 [2024-10-01 15:53:58.920664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67e220 (107): Transport endpoint is not connected
00:19:48.900 [2024-10-01 15:53:58.921658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67e220 (9): Bad file descriptor
00:19:48.900 [2024-10-01 15:53:58.922660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:48.900 [2024-10-01 15:53:58.922670] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:48.900 [2024-10-01 15:53:58.922678] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:19:48.900 [2024-10-01 15:53:58.922689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:48.900 request:
00:19:48.900 {
00:19:48.900 "name": "TLSTEST",
00:19:48.900 "trtype": "tcp",
00:19:48.900 "traddr": "10.0.0.2",
00:19:48.900 "adrfam": "ipv4",
00:19:48.900 "trsvcid": "4420",
00:19:48.900 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:48.900 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:19:48.900 "prchk_reftag": false,
00:19:48.900 "prchk_guard": false,
00:19:48.900 "hdgst": false,
00:19:48.900 "ddgst": false,
00:19:48.900 "psk": "key0",
00:19:48.900 "allow_unrecognized_csi": false,
00:19:48.900 "method": "bdev_nvme_attach_controller",
00:19:48.900 "req_id": 1
00:19:48.900 }
00:19:48.900 Got JSON-RPC error response
00:19:48.900 response:
00:19:48.900 {
00:19:48.900 "code": -5,
00:19:48.900 "message": "Input/output error"
00:19:48.900 }
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2454343
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2454343 ']'
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2454343
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:48.900 15:53:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2454343
00:19:48.900 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:48.900 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:48.900 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2454343'
killing process with pid 2454343
00:19:48.900 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2454343
00:19:48.900 Received shutdown signal, test time was about 10.000000 seconds
00:19:48.900
00:19:48.900 Latency(us)
00:19:48.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:48.900 ===================================================================================================================
00:19:48.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:48.900 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2454343
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iegb3MmDVX
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iegb3MmDVX
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:49.159 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iegb3MmDVX
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iegb3MmDVX
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2454639
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2454639 /var/tmp/bdevperf.sock
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2454639 ']'
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:49.160 15:53:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:49.160 [2024-10-01 15:53:59.229079] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:19:49.160 [2024-10-01 15:53:59.229125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454639 ]
00:19:49.160 [2024-10-01 15:53:59.297125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:49.418 [2024-10-01 15:53:59.375649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:49.985 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:49.985 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:49.985 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iegb3MmDVX
00:19:50.243 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:50.502 [2024-10-01 15:54:00.499300] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:50.502 [2024-10-01 15:54:00.504355] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:50.502 [2024-10-01 15:54:00.504381] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:50.502 [2024-10-01 15:54:00.504411] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:50.502 [2024-10-01 15:54:00.504566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7220 (107): Transport endpoint is not connected
00:19:50.502 [2024-10-01 15:54:00.505558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7220 (9): Bad file descriptor
00:19:50.502 [2024-10-01 15:54:00.506559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:50.502 [2024-10-01 15:54:00.506574] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:50.502 [2024-10-01 15:54:00.506581] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:19:50.502 [2024-10-01 15:54:00.506591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:50.502 request:
00:19:50.502 {
00:19:50.502 "name": "TLSTEST",
00:19:50.502 "trtype": "tcp",
00:19:50.502 "traddr": "10.0.0.2",
00:19:50.502 "adrfam": "ipv4",
00:19:50.502 "trsvcid": "4420",
00:19:50.502 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:19:50.502 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:50.502 "prchk_reftag": false,
00:19:50.502 "prchk_guard": false,
00:19:50.502 "hdgst": false,
00:19:50.502 "ddgst": false,
00:19:50.502 "psk": "key0",
00:19:50.502 "allow_unrecognized_csi": false,
00:19:50.502 "method": "bdev_nvme_attach_controller",
00:19:50.502 "req_id": 1
00:19:50.502 }
00:19:50.502 Got JSON-RPC error response
00:19:50.502 response:
00:19:50.502 {
00:19:50.502 "code": -5,
00:19:50.502 "message": "Input/output error"
00:19:50.502 }
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2454639
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2454639 ']'
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2454639
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2454639
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2454639'
killing process with pid 2454639
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2454639
00:19:50.502 Received shutdown signal, test time was about 10.000000 seconds
00:19:50.502
00:19:50.502 Latency(us)
00:19:50.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.502 ===================================================================================================================
00:19:50.502 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:50.502 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2454639
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:50.761 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2455012
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2455012 /var/tmp/bdevperf.sock
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2455012 ']'
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:50.762 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:50.762 [2024-10-01 15:54:00.797458] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization...
00:19:50.762 [2024-10-01 15:54:00.797506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455012 ]
00:19:50.762 [2024-10-01 15:54:00.864946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.762 [2024-10-01 15:54:00.943463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:19:51.699 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:51.699 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:19:51.699 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:19:51.699 [2024-10-01 15:54:01.799234] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed:
00:19:51.699 [2024-10-01 15:54:01.799263] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:19:51.699 request:
00:19:51.699 {
00:19:51.699 "name": "key0",
00:19:51.699 "path": "",
00:19:51.699 "method": "keyring_file_add_key",
00:19:51.699 "req_id": 1
00:19:51.699 }
00:19:51.699 Got JSON-RPC error response
00:19:51.699 response:
00:19:51.699 {
00:19:51.699 "code": -1,
00:19:51.699 "message": "Operation not permitted"
00:19:51.699 }
00:19:51.699 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:51.959 [2024-10-01 15:54:01.971771] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:51.959 [2024-10-01 15:54:01.971808] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:19:51.959 request:
00:19:51.959 {
00:19:51.959 "name": "TLSTEST",
00:19:51.959 "trtype": "tcp",
00:19:51.959 "traddr": "10.0.0.2",
00:19:51.959 "adrfam": "ipv4",
00:19:51.959 "trsvcid": "4420",
00:19:51.959 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:51.959 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:51.959 "prchk_reftag": false,
00:19:51.959 "prchk_guard": false,
00:19:51.959 "hdgst": false,
00:19:51.959 "ddgst": false,
00:19:51.959 "psk": "key0",
00:19:51.959 "allow_unrecognized_csi": false,
00:19:51.959 "method": "bdev_nvme_attach_controller",
00:19:51.959 "req_id": 1
00:19:51.959 }
00:19:51.959 Got JSON-RPC error response
00:19:51.959 response:
00:19:51.959 {
00:19:51.959 "code": -126,
00:19:51.959 "message": "Required key not available"
00:19:51.959 }
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2455012
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2455012 ']'
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2455012
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:51.959 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455012 00:19:51.959 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:51.959 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:51.959 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455012' 00:19:51.959 killing process with pid 2455012 00:19:51.959 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2455012 00:19:51.959 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.959 00:19:51.959 Latency(us) 00:19:51.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.959 =================================================================================================================== 00:19:51.959 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.959 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2455012 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2449790 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2449790 ']' 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 2449790 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2449790 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2449790' 00:19:52.218 killing process with pid 2449790 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2449790 00:19:52.218 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2449790 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # 
key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ktOg8wkXvb 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ktOg8wkXvb 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2455345 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2455345 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2455345 ']' 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.478 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.479 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:52.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.479 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.479 15:54:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.479 [2024-10-01 15:54:02.573063] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:19:52.479 [2024-10-01 15:54:02.573110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.479 [2024-10-01 15:54:02.642923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.738 [2024-10-01 15:54:02.720726] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.738 [2024-10-01 15:54:02.720762] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.738 [2024-10-01 15:54:02.720769] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.738 [2024-10-01 15:54:02.720775] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.738 [2024-10-01 15:54:02.720780] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
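The `format_interchange_psk` step above (the `format_key` helper from `nvmf/common.sh`, run via the inline `python -` heredoc) turns the configured key string into the NVMe TLS PSK interchange form: a `NVMeTLSkey-1` prefix, a two-digit hash identifier, and a base64 payload of the key bytes with a CRC-32 appended. A sketch of that derivation, under the assumption that the CRC-32 is `zlib.crc32` over the literal key string, appended little-endian, which is consistent with the helper's visible behavior but not verified against SPDK source here:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    # Append the CRC-32 of the configured key (little-endian) to the key
    # bytes, base64 the result, and wrap it with prefix and hash id.
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, b64)

# Same inputs as the log: 48-hex-char key string, digest 2 (SHA-384).
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

The result is then written to a `mktemp` file and `chmod 0600`'d, since (as the later part of this run demonstrates) the keyring refuses key files with looser permissions.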
00:19:52.738 [2024-10-01 15:54:02.720806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ktOg8wkXvb 00:19:53.306 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.565 [2024-10-01 15:54:03.614024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.565 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.823 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.823 [2024-10-01 15:54:04.011018] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.823 [2024-10-01 15:54:04.011215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:54.081 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.081 malloc0 00:19:54.081 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.339 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:19:54.599 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ktOg8wkXvb 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ktOg8wkXvb 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2455744 00:19:54.858 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2455744 /var/tmp/bdevperf.sock 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2455744 ']' 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.859 15:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.859 [2024-10-01 15:54:04.883532] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:19:54.859 [2024-10-01 15:54:04.883585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455744 ] 00:19:54.859 [2024-10-01 15:54:04.951532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.859 [2024-10-01 15:54:05.026302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.794 15:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.794 15:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.794 15:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:19:55.794 15:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.052 [2024-10-01 15:54:06.061434] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.052 TLSTESTn1 00:19:56.052 15:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.311 Running I/O for 10 seconds... 
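The `perform_tests` run that follows reports per-second IOPS samples and then a summary table with an MiB/s column; that column is just the IOPS figure scaled by the 4096-byte I/O size (`-o 4096` on the bdevperf command line). A small check of that arithmetic against the totals printed in the results JSON below (the numeric values are copied from the log):

```python
IO_SIZE = 4096                    # -o 4096 passed to bdevperf
iops = 5615.469915450053          # "iops" from the results JSON
runtime = 10.01893                # "runtime" from the results JSON

# Bytes/s divided by 2^20 gives the MiB/s column.
mibps = iops * IO_SIZE / (1 << 20)

# Total I/Os completed over the run can be recovered the same way.
total_ios = iops * runtime
```

With these inputs `mibps` reproduces the `"mibps": 21.93542935722677` field in the results to full printed precision, since dividing by 2^20 and multiplying by 4096 is an exact power-of-two scaling.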
00:20:06.127 5608.00 IOPS, 21.91 MiB/s 5665.00 IOPS, 22.13 MiB/s 5677.67 IOPS, 22.18 MiB/s 5645.75 IOPS, 22.05 MiB/s 5640.20 IOPS, 22.03 MiB/s 5622.00 IOPS, 21.96 MiB/s 5619.43 IOPS, 21.95 MiB/s 5623.00 IOPS, 21.96 MiB/s 5626.78 IOPS, 21.98 MiB/s 5613.40 IOPS, 21.93 MiB/s 00:20:06.127 Latency(us) 00:20:06.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.127 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.127 Verification LBA range: start 0x0 length 0x2000 00:20:06.127 TLSTESTn1 : 10.02 5615.47 21.94 0.00 0.00 22756.76 4681.14 20846.69 00:20:06.127 =================================================================================================================== 00:20:06.127 Total : 5615.47 21.94 0.00 0.00 22756.76 4681.14 20846.69 00:20:06.127 { 00:20:06.127 "results": [ 00:20:06.127 { 00:20:06.127 "job": "TLSTESTn1", 00:20:06.127 "core_mask": "0x4", 00:20:06.127 "workload": "verify", 00:20:06.127 "status": "finished", 00:20:06.127 "verify_range": { 00:20:06.127 "start": 0, 00:20:06.127 "length": 8192 00:20:06.127 }, 00:20:06.127 "queue_depth": 128, 00:20:06.127 "io_size": 4096, 00:20:06.127 "runtime": 10.01893, 00:20:06.127 "iops": 5615.469915450053, 00:20:06.127 "mibps": 21.93542935722677, 00:20:06.127 "io_failed": 0, 00:20:06.127 "io_timeout": 0, 00:20:06.127 "avg_latency_us": 22756.75877851612, 00:20:06.127 "min_latency_us": 4681.142857142857, 00:20:06.127 "max_latency_us": 20846.689523809524 00:20:06.127 } 00:20:06.127 ], 00:20:06.127 "core_count": 1 00:20:06.127 } 00:20:06.127 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.127 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2455744 00:20:06.127 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2455744 ']' 00:20:06.127 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 2455744 00:20:06.127 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455744 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455744' 00:20:06.387 killing process with pid 2455744 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2455744 00:20:06.387 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.387 00:20:06.387 Latency(us) 00:20:06.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.387 =================================================================================================================== 00:20:06.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2455744 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ktOg8wkXvb 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ktOg8wkXvb 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.ktOg8wkXvb 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ktOg8wkXvb 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ktOg8wkXvb 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2458048 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2458048 /var/tmp/bdevperf.sock 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2458048 ']' 00:20:06.387 15:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.387 15:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.647 [2024-10-01 15:54:16.610633] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:06.647 [2024-10-01 15:54:16.610683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458048 ] 00:20:06.647 [2024-10-01 15:54:16.678709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.647 [2024-10-01 15:54:16.745164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.584 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.584 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.584 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:07.584 [2024-10-01 15:54:17.631665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ktOg8wkXvb': 0100666 00:20:07.584 [2024-10-01 15:54:17.631697] 
keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:07.584 request: 00:20:07.584 { 00:20:07.584 "name": "key0", 00:20:07.584 "path": "/tmp/tmp.ktOg8wkXvb", 00:20:07.584 "method": "keyring_file_add_key", 00:20:07.584 "req_id": 1 00:20:07.584 } 00:20:07.584 Got JSON-RPC error response 00:20:07.584 response: 00:20:07.584 { 00:20:07.584 "code": -1, 00:20:07.584 "message": "Operation not permitted" 00:20:07.584 } 00:20:07.584 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.843 [2024-10-01 15:54:17.840289] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.843 [2024-10-01 15:54:17.840320] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:07.843 request: 00:20:07.843 { 00:20:07.843 "name": "TLSTEST", 00:20:07.843 "trtype": "tcp", 00:20:07.843 "traddr": "10.0.0.2", 00:20:07.843 "adrfam": "ipv4", 00:20:07.843 "trsvcid": "4420", 00:20:07.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.843 "prchk_reftag": false, 00:20:07.843 "prchk_guard": false, 00:20:07.843 "hdgst": false, 00:20:07.843 "ddgst": false, 00:20:07.843 "psk": "key0", 00:20:07.843 "allow_unrecognized_csi": false, 00:20:07.843 "method": "bdev_nvme_attach_controller", 00:20:07.843 "req_id": 1 00:20:07.843 } 00:20:07.843 Got JSON-RPC error response 00:20:07.843 response: 00:20:07.843 { 00:20:07.843 "code": -126, 00:20:07.843 "message": "Required key not available" 00:20:07.843 } 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2458048 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 2458048 ']' 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2458048 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2458048 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2458048' 00:20:07.844 killing process with pid 2458048 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2458048 00:20:07.844 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.844 00:20:07.844 Latency(us) 00:20:07.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.844 =================================================================================================================== 00:20:07.844 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.844 15:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2458048 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2455345 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2455345 ']' 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2455345 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455345 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455345' 00:20:08.161 killing process with pid 2455345 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2455345 00:20:08.161 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2455345 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2458301 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2458301 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2458301 ']' 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.466 15:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.466 [2024-10-01 15:54:18.391623] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:08.466 [2024-10-01 15:54:18.391669] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.466 [2024-10-01 15:54:18.459503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.466 [2024-10-01 15:54:18.536487] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.466 [2024-10-01 15:54:18.536521] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
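Both `keyring_file_add_key` failures in this section come from the same validation, `keyring_file_check_path`: the empty (non-absolute) path at the start of the section, and the `0100666` mode after `chmod 0666` — the keyring refuses key files that group or other can access. A sketch approximating those two checks; the exact permission mask SPDK enforces is an assumption inferred from the log's "0600 accepted, 0666 rejected" behavior, and the error strings are copied from the log for illustration:

```python
import os
import stat

def check_key_path(path: str) -> None:
    # First error seen in the log: relative/empty paths are rejected.
    if not os.path.isabs(path):
        raise ValueError("Non-absolute paths are not allowed: %s" % path)
    mode = os.stat(path).st_mode
    # Second error seen in the log: any group/other permission bit set
    # (e.g. 0666) makes the key file unusable.
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '%s': %o" % (path, mode))
```

This is why the test flow here deliberately re-runs `run_bdevperf` under `NOT` after the `chmod 0666`: the expected outcome is the `-1` keyring error followed by the `-126` attach failure.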
00:20:08.466 [2024-10-01 15:54:18.536528] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.467 [2024-10-01 15:54:18.536535] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.467 [2024-10-01 15:54:18.536540] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.467 [2024-10-01 15:54:18.536558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.074 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:09.332 15:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.332 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:20:09.332 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ktOg8wkXvb 00:20:09.332 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.332 [2024-10-01 15:54:19.427593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.332 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.590 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.849 [2024-10-01 15:54:19.808571] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.849 [2024-10-01 15:54:19.808768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.849 15:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.849 malloc0 00:20:10.108 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.108 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:10.366 [2024-10-01 15:54:20.413779] 
keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ktOg8wkXvb': 0100666 00:20:10.366 [2024-10-01 15:54:20.413807] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:10.366 request: 00:20:10.366 { 00:20:10.366 "name": "key0", 00:20:10.366 "path": "/tmp/tmp.ktOg8wkXvb", 00:20:10.366 "method": "keyring_file_add_key", 00:20:10.366 "req_id": 1 00:20:10.366 } 00:20:10.366 Got JSON-RPC error response 00:20:10.366 response: 00:20:10.366 { 00:20:10.366 "code": -1, 00:20:10.366 "message": "Operation not permitted" 00:20:10.366 } 00:20:10.367 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.625 [2024-10-01 15:54:20.610314] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:10.625 [2024-10-01 15:54:20.610349] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:10.625 request: 00:20:10.625 { 00:20:10.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.625 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.625 "psk": "key0", 00:20:10.625 "method": "nvmf_subsystem_add_host", 00:20:10.625 "req_id": 1 00:20:10.625 } 00:20:10.625 Got JSON-RPC error response 00:20:10.625 response: 00:20:10.625 { 00:20:10.625 "code": -32603, 00:20:10.625 "message": "Internal error" 00:20:10.625 } 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:10.625 15:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2458301 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2458301 ']' 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2458301 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.625 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2458301 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2458301' 00:20:10.626 killing process with pid 2458301 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2458301 00:20:10.626 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2458301 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ktOg8wkXvb 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2458791 00:20:10.884 15:54:20 
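The `keyring_file_add_key` failure above is the point of this test case: SPDK's file-based keyring rejects PSK files that are readable by group or other (the log shows mode `0100666` being refused with "Operation not permitted"), and `tls.sh@182` then runs `chmod 0600` on the key before the next target instance retries. A minimal standalone sketch of that permission precondition, using a throwaway `mktemp` file in place of the run's actual `/tmp/tmp.ktOg8wkXvb` key (GNU `stat -c` assumed, as on the Linux CI host):

```shell
#!/usr/bin/env bash
# Sketch of the mode check keyring_file_check_path enforces, inferred from
# the log: 0666 (world-readable) is rejected, 0600 (owner-only) is accepted.
set -e
KEY=$(mktemp)                  # stand-in for the test's /tmp/tmp.ktOg8wkXvb
chmod 0666 "$KEY"
mode=$(stat -c '%a' "$KEY")
echo "before: $mode"           # 666 -- SPDK's keyring would refuse this file
chmod 0600 "$KEY"              # same fix tls.sh applies before retrying
mode=$(stat -c '%a' "$KEY")
echo "after: $mode"            # 600 -- keyring_file_add_key now succeeds
rm -f "$KEY"
```

Only after this tightening does the second target (`nvmfpid=2458791`, started next in the log) load the same key without error.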
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2458791 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2458791 ']' 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.884 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.884 [2024-10-01 15:54:20.940444] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:10.884 [2024-10-01 15:54:20.940488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.884 [2024-10-01 15:54:21.013077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.143 [2024-10-01 15:54:21.084688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.143 [2024-10-01 15:54:21.084727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.143 [2024-10-01 15:54:21.084734] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.143 [2024-10-01 15:54:21.084740] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.143 [2024-10-01 15:54:21.084745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.143 [2024-10-01 15:54:21.084764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ktOg8wkXvb 00:20:11.709 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.967 [2024-10-01 15:54:21.980839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.967 15:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.226 15:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.226 [2024-10-01 15:54:22.369834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.226 [2024-10-01 15:54:22.370038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.226 15:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.485 malloc0 00:20:12.485 15:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.745 15:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:13.004 15:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2459149 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2459149 /var/tmp/bdevperf.sock 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
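The `setup_nvmf_tgt` helper traced above (target/tls.sh@50-59) reduces to the RPC sequence below, reconstructed from the log with the long Jenkins workspace prefix shortened to `scripts/rpc.py`. It is printed rather than executed here, since each call needs a running `nvmf_tgt` listening on `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# TLS-enabled target setup, as driven by target/tls.sh in the log above.
# -k on the listener enables TLS (flagged experimental in the notices);
# --psk binds the keyring entry to the allowed host.
set -e
RPC="scripts/rpc.py"           # shortened from the CI workspace path
CMDS=$(cat <<EOF
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
EOF
)
printf '%s\n' "$CMDS"
```

The bdevperf initiator side then mirrors this with `keyring_file_add_key` plus `bdev_nvme_attach_controller ... --psk key0` against `/var/tmp/bdevperf.sock`, as the next log lines show.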
-z 2459149 ']' 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:13.004 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.262 [2024-10-01 15:54:23.222538] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:13.262 [2024-10-01 15:54:23.222588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459149 ] 00:20:13.262 [2024-10-01 15:54:23.292299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.262 [2024-10-01 15:54:23.366518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.197 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.197 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:14.197 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:14.197 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:14.456 [2024-10-01 15:54:24.433774] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.456 TLSTESTn1 00:20:14.456 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:14.715 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:14.715 "subsystems": [ 00:20:14.715 { 00:20:14.715 "subsystem": "keyring", 00:20:14.715 "config": [ 00:20:14.715 { 00:20:14.715 "method": "keyring_file_add_key", 00:20:14.715 "params": { 00:20:14.715 "name": "key0", 00:20:14.715 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:14.715 } 00:20:14.715 } 00:20:14.715 ] 00:20:14.715 }, 00:20:14.715 { 00:20:14.715 "subsystem": "iobuf", 00:20:14.715 "config": [ 00:20:14.715 { 00:20:14.715 "method": "iobuf_set_options", 00:20:14.715 "params": { 00:20:14.715 "small_pool_count": 8192, 00:20:14.715 "large_pool_count": 1024, 00:20:14.715 "small_bufsize": 8192, 00:20:14.715 "large_bufsize": 135168 00:20:14.715 } 00:20:14.715 } 00:20:14.715 ] 00:20:14.715 }, 00:20:14.715 { 00:20:14.715 "subsystem": "sock", 00:20:14.715 "config": [ 00:20:14.715 { 00:20:14.715 "method": "sock_set_default_impl", 00:20:14.715 "params": { 00:20:14.715 "impl_name": "posix" 00:20:14.715 } 00:20:14.715 }, 00:20:14.715 { 00:20:14.715 "method": "sock_impl_set_options", 00:20:14.715 "params": { 00:20:14.715 "impl_name": "ssl", 00:20:14.715 "recv_buf_size": 4096, 00:20:14.715 "send_buf_size": 4096, 00:20:14.715 "enable_recv_pipe": true, 00:20:14.715 "enable_quickack": false, 00:20:14.715 "enable_placement_id": 0, 00:20:14.715 "enable_zerocopy_send_server": true, 00:20:14.716 "enable_zerocopy_send_client": false, 00:20:14.716 "zerocopy_threshold": 0, 00:20:14.716 "tls_version": 0, 
00:20:14.716 "enable_ktls": false 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "sock_impl_set_options", 00:20:14.716 "params": { 00:20:14.716 "impl_name": "posix", 00:20:14.716 "recv_buf_size": 2097152, 00:20:14.716 "send_buf_size": 2097152, 00:20:14.716 "enable_recv_pipe": true, 00:20:14.716 "enable_quickack": false, 00:20:14.716 "enable_placement_id": 0, 00:20:14.716 "enable_zerocopy_send_server": true, 00:20:14.716 "enable_zerocopy_send_client": false, 00:20:14.716 "zerocopy_threshold": 0, 00:20:14.716 "tls_version": 0, 00:20:14.716 "enable_ktls": false 00:20:14.716 } 00:20:14.716 } 00:20:14.716 ] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "vmd", 00:20:14.716 "config": [] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "accel", 00:20:14.716 "config": [ 00:20:14.716 { 00:20:14.716 "method": "accel_set_options", 00:20:14.716 "params": { 00:20:14.716 "small_cache_size": 128, 00:20:14.716 "large_cache_size": 16, 00:20:14.716 "task_count": 2048, 00:20:14.716 "sequence_count": 2048, 00:20:14.716 "buf_count": 2048 00:20:14.716 } 00:20:14.716 } 00:20:14.716 ] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "bdev", 00:20:14.716 "config": [ 00:20:14.716 { 00:20:14.716 "method": "bdev_set_options", 00:20:14.716 "params": { 00:20:14.716 "bdev_io_pool_size": 65535, 00:20:14.716 "bdev_io_cache_size": 256, 00:20:14.716 "bdev_auto_examine": true, 00:20:14.716 "iobuf_small_cache_size": 128, 00:20:14.716 "iobuf_large_cache_size": 16 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_raid_set_options", 00:20:14.716 "params": { 00:20:14.716 "process_window_size_kb": 1024, 00:20:14.716 "process_max_bandwidth_mb_sec": 0 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_iscsi_set_options", 00:20:14.716 "params": { 00:20:14.716 "timeout_sec": 30 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_nvme_set_options", 00:20:14.716 "params": { 00:20:14.716 
"action_on_timeout": "none", 00:20:14.716 "timeout_us": 0, 00:20:14.716 "timeout_admin_us": 0, 00:20:14.716 "keep_alive_timeout_ms": 10000, 00:20:14.716 "arbitration_burst": 0, 00:20:14.716 "low_priority_weight": 0, 00:20:14.716 "medium_priority_weight": 0, 00:20:14.716 "high_priority_weight": 0, 00:20:14.716 "nvme_adminq_poll_period_us": 10000, 00:20:14.716 "nvme_ioq_poll_period_us": 0, 00:20:14.716 "io_queue_requests": 0, 00:20:14.716 "delay_cmd_submit": true, 00:20:14.716 "transport_retry_count": 4, 00:20:14.716 "bdev_retry_count": 3, 00:20:14.716 "transport_ack_timeout": 0, 00:20:14.716 "ctrlr_loss_timeout_sec": 0, 00:20:14.716 "reconnect_delay_sec": 0, 00:20:14.716 "fast_io_fail_timeout_sec": 0, 00:20:14.716 "disable_auto_failback": false, 00:20:14.716 "generate_uuids": false, 00:20:14.716 "transport_tos": 0, 00:20:14.716 "nvme_error_stat": false, 00:20:14.716 "rdma_srq_size": 0, 00:20:14.716 "io_path_stat": false, 00:20:14.716 "allow_accel_sequence": false, 00:20:14.716 "rdma_max_cq_size": 0, 00:20:14.716 "rdma_cm_event_timeout_ms": 0, 00:20:14.716 "dhchap_digests": [ 00:20:14.716 "sha256", 00:20:14.716 "sha384", 00:20:14.716 "sha512" 00:20:14.716 ], 00:20:14.716 "dhchap_dhgroups": [ 00:20:14.716 "null", 00:20:14.716 "ffdhe2048", 00:20:14.716 "ffdhe3072", 00:20:14.716 "ffdhe4096", 00:20:14.716 "ffdhe6144", 00:20:14.716 "ffdhe8192" 00:20:14.716 ] 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_nvme_set_hotplug", 00:20:14.716 "params": { 00:20:14.716 "period_us": 100000, 00:20:14.716 "enable": false 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_malloc_create", 00:20:14.716 "params": { 00:20:14.716 "name": "malloc0", 00:20:14.716 "num_blocks": 8192, 00:20:14.716 "block_size": 4096, 00:20:14.716 "physical_block_size": 4096, 00:20:14.716 "uuid": "27f05f1c-5f5b-4a0e-8426-4f4a7325cb04", 00:20:14.716 "optimal_io_boundary": 0, 00:20:14.716 "md_size": 0, 00:20:14.716 "dif_type": 0, 00:20:14.716 
"dif_is_head_of_md": false, 00:20:14.716 "dif_pi_format": 0 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "bdev_wait_for_examine" 00:20:14.716 } 00:20:14.716 ] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "nbd", 00:20:14.716 "config": [] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "scheduler", 00:20:14.716 "config": [ 00:20:14.716 { 00:20:14.716 "method": "framework_set_scheduler", 00:20:14.716 "params": { 00:20:14.716 "name": "static" 00:20:14.716 } 00:20:14.716 } 00:20:14.716 ] 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "subsystem": "nvmf", 00:20:14.716 "config": [ 00:20:14.716 { 00:20:14.716 "method": "nvmf_set_config", 00:20:14.716 "params": { 00:20:14.716 "discovery_filter": "match_any", 00:20:14.716 "admin_cmd_passthru": { 00:20:14.716 "identify_ctrlr": false 00:20:14.716 }, 00:20:14.716 "dhchap_digests": [ 00:20:14.716 "sha256", 00:20:14.716 "sha384", 00:20:14.716 "sha512" 00:20:14.716 ], 00:20:14.716 "dhchap_dhgroups": [ 00:20:14.716 "null", 00:20:14.716 "ffdhe2048", 00:20:14.716 "ffdhe3072", 00:20:14.716 "ffdhe4096", 00:20:14.716 "ffdhe6144", 00:20:14.716 "ffdhe8192" 00:20:14.716 ] 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "nvmf_set_max_subsystems", 00:20:14.716 "params": { 00:20:14.716 "max_subsystems": 1024 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "nvmf_set_crdt", 00:20:14.716 "params": { 00:20:14.716 "crdt1": 0, 00:20:14.716 "crdt2": 0, 00:20:14.716 "crdt3": 0 00:20:14.716 } 00:20:14.716 }, 00:20:14.716 { 00:20:14.716 "method": "nvmf_create_transport", 00:20:14.717 "params": { 00:20:14.717 "trtype": "TCP", 00:20:14.717 "max_queue_depth": 128, 00:20:14.717 "max_io_qpairs_per_ctrlr": 127, 00:20:14.717 "in_capsule_data_size": 4096, 00:20:14.717 "max_io_size": 131072, 00:20:14.717 "io_unit_size": 131072, 00:20:14.717 "max_aq_depth": 128, 00:20:14.717 "num_shared_buffers": 511, 00:20:14.717 "buf_cache_size": 4294967295, 00:20:14.717 "dif_insert_or_strip": 
false, 00:20:14.717 "zcopy": false, 00:20:14.717 "c2h_success": false, 00:20:14.717 "sock_priority": 0, 00:20:14.717 "abort_timeout_sec": 1, 00:20:14.717 "ack_timeout": 0, 00:20:14.717 "data_wr_pool_size": 0 00:20:14.717 } 00:20:14.717 }, 00:20:14.717 { 00:20:14.717 "method": "nvmf_create_subsystem", 00:20:14.717 "params": { 00:20:14.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.717 "allow_any_host": false, 00:20:14.717 "serial_number": "SPDK00000000000001", 00:20:14.717 "model_number": "SPDK bdev Controller", 00:20:14.717 "max_namespaces": 10, 00:20:14.717 "min_cntlid": 1, 00:20:14.717 "max_cntlid": 65519, 00:20:14.717 "ana_reporting": false 00:20:14.717 } 00:20:14.717 }, 00:20:14.717 { 00:20:14.717 "method": "nvmf_subsystem_add_host", 00:20:14.717 "params": { 00:20:14.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.717 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.717 "psk": "key0" 00:20:14.717 } 00:20:14.717 }, 00:20:14.717 { 00:20:14.717 "method": "nvmf_subsystem_add_ns", 00:20:14.717 "params": { 00:20:14.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.717 "namespace": { 00:20:14.717 "nsid": 1, 00:20:14.717 "bdev_name": "malloc0", 00:20:14.717 "nguid": "27F05F1C5F5B4A0E84264F4A7325CB04", 00:20:14.717 "uuid": "27f05f1c-5f5b-4a0e-8426-4f4a7325cb04", 00:20:14.717 "no_auto_visible": false 00:20:14.717 } 00:20:14.717 } 00:20:14.717 }, 00:20:14.717 { 00:20:14.717 "method": "nvmf_subsystem_add_listener", 00:20:14.717 "params": { 00:20:14.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.717 "listen_address": { 00:20:14.717 "trtype": "TCP", 00:20:14.717 "adrfam": "IPv4", 00:20:14.717 "traddr": "10.0.0.2", 00:20:14.717 "trsvcid": "4420" 00:20:14.717 }, 00:20:14.717 "secure_channel": true 00:20:14.717 } 00:20:14.717 } 00:20:14.717 ] 00:20:14.717 } 00:20:14.717 ] 00:20:14.717 }' 00:20:14.717 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:20:14.976 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:14.976 "subsystems": [ 00:20:14.976 { 00:20:14.976 "subsystem": "keyring", 00:20:14.976 "config": [ 00:20:14.976 { 00:20:14.976 "method": "keyring_file_add_key", 00:20:14.976 "params": { 00:20:14.976 "name": "key0", 00:20:14.976 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:14.976 } 00:20:14.976 } 00:20:14.976 ] 00:20:14.976 }, 00:20:14.976 { 00:20:14.976 "subsystem": "iobuf", 00:20:14.976 "config": [ 00:20:14.976 { 00:20:14.976 "method": "iobuf_set_options", 00:20:14.976 "params": { 00:20:14.976 "small_pool_count": 8192, 00:20:14.976 "large_pool_count": 1024, 00:20:14.976 "small_bufsize": 8192, 00:20:14.976 "large_bufsize": 135168 00:20:14.976 } 00:20:14.976 } 00:20:14.976 ] 00:20:14.976 }, 00:20:14.976 { 00:20:14.976 "subsystem": "sock", 00:20:14.976 "config": [ 00:20:14.976 { 00:20:14.976 "method": "sock_set_default_impl", 00:20:14.976 "params": { 00:20:14.976 "impl_name": "posix" 00:20:14.976 } 00:20:14.976 }, 00:20:14.976 { 00:20:14.976 "method": "sock_impl_set_options", 00:20:14.976 "params": { 00:20:14.976 "impl_name": "ssl", 00:20:14.976 "recv_buf_size": 4096, 00:20:14.976 "send_buf_size": 4096, 00:20:14.976 "enable_recv_pipe": true, 00:20:14.976 "enable_quickack": false, 00:20:14.976 "enable_placement_id": 0, 00:20:14.976 "enable_zerocopy_send_server": true, 00:20:14.976 "enable_zerocopy_send_client": false, 00:20:14.976 "zerocopy_threshold": 0, 00:20:14.976 "tls_version": 0, 00:20:14.976 "enable_ktls": false 00:20:14.976 } 00:20:14.976 }, 00:20:14.976 { 00:20:14.976 "method": "sock_impl_set_options", 00:20:14.976 "params": { 00:20:14.976 "impl_name": "posix", 00:20:14.976 "recv_buf_size": 2097152, 00:20:14.976 "send_buf_size": 2097152, 00:20:14.976 "enable_recv_pipe": true, 00:20:14.976 "enable_quickack": false, 00:20:14.976 "enable_placement_id": 0, 00:20:14.976 "enable_zerocopy_send_server": true, 00:20:14.976 "enable_zerocopy_send_client": false, 
00:20:14.976 "zerocopy_threshold": 0, 00:20:14.976 "tls_version": 0, 00:20:14.976 "enable_ktls": false 00:20:14.976 } 00:20:14.976 } 00:20:14.976 ] 00:20:14.976 }, 00:20:14.976 { 00:20:14.976 "subsystem": "vmd", 00:20:14.977 "config": [] 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "subsystem": "accel", 00:20:14.977 "config": [ 00:20:14.977 { 00:20:14.977 "method": "accel_set_options", 00:20:14.977 "params": { 00:20:14.977 "small_cache_size": 128, 00:20:14.977 "large_cache_size": 16, 00:20:14.977 "task_count": 2048, 00:20:14.977 "sequence_count": 2048, 00:20:14.977 "buf_count": 2048 00:20:14.977 } 00:20:14.977 } 00:20:14.977 ] 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "subsystem": "bdev", 00:20:14.977 "config": [ 00:20:14.977 { 00:20:14.977 "method": "bdev_set_options", 00:20:14.977 "params": { 00:20:14.977 "bdev_io_pool_size": 65535, 00:20:14.977 "bdev_io_cache_size": 256, 00:20:14.977 "bdev_auto_examine": true, 00:20:14.977 "iobuf_small_cache_size": 128, 00:20:14.977 "iobuf_large_cache_size": 16 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_raid_set_options", 00:20:14.977 "params": { 00:20:14.977 "process_window_size_kb": 1024, 00:20:14.977 "process_max_bandwidth_mb_sec": 0 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_iscsi_set_options", 00:20:14.977 "params": { 00:20:14.977 "timeout_sec": 30 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_nvme_set_options", 00:20:14.977 "params": { 00:20:14.977 "action_on_timeout": "none", 00:20:14.977 "timeout_us": 0, 00:20:14.977 "timeout_admin_us": 0, 00:20:14.977 "keep_alive_timeout_ms": 10000, 00:20:14.977 "arbitration_burst": 0, 00:20:14.977 "low_priority_weight": 0, 00:20:14.977 "medium_priority_weight": 0, 00:20:14.977 "high_priority_weight": 0, 00:20:14.977 "nvme_adminq_poll_period_us": 10000, 00:20:14.977 "nvme_ioq_poll_period_us": 0, 00:20:14.977 "io_queue_requests": 512, 00:20:14.977 "delay_cmd_submit": true, 00:20:14.977 
"transport_retry_count": 4, 00:20:14.977 "bdev_retry_count": 3, 00:20:14.977 "transport_ack_timeout": 0, 00:20:14.977 "ctrlr_loss_timeout_sec": 0, 00:20:14.977 "reconnect_delay_sec": 0, 00:20:14.977 "fast_io_fail_timeout_sec": 0, 00:20:14.977 "disable_auto_failback": false, 00:20:14.977 "generate_uuids": false, 00:20:14.977 "transport_tos": 0, 00:20:14.977 "nvme_error_stat": false, 00:20:14.977 "rdma_srq_size": 0, 00:20:14.977 "io_path_stat": false, 00:20:14.977 "allow_accel_sequence": false, 00:20:14.977 "rdma_max_cq_size": 0, 00:20:14.977 "rdma_cm_event_timeout_ms": 0, 00:20:14.977 "dhchap_digests": [ 00:20:14.977 "sha256", 00:20:14.977 "sha384", 00:20:14.977 "sha512" 00:20:14.977 ], 00:20:14.977 "dhchap_dhgroups": [ 00:20:14.977 "null", 00:20:14.977 "ffdhe2048", 00:20:14.977 "ffdhe3072", 00:20:14.977 "ffdhe4096", 00:20:14.977 "ffdhe6144", 00:20:14.977 "ffdhe8192" 00:20:14.977 ] 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_nvme_attach_controller", 00:20:14.977 "params": { 00:20:14.977 "name": "TLSTEST", 00:20:14.977 "trtype": "TCP", 00:20:14.977 "adrfam": "IPv4", 00:20:14.977 "traddr": "10.0.0.2", 00:20:14.977 "trsvcid": "4420", 00:20:14.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.977 "prchk_reftag": false, 00:20:14.977 "prchk_guard": false, 00:20:14.977 "ctrlr_loss_timeout_sec": 0, 00:20:14.977 "reconnect_delay_sec": 0, 00:20:14.977 "fast_io_fail_timeout_sec": 0, 00:20:14.977 "psk": "key0", 00:20:14.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.977 "hdgst": false, 00:20:14.977 "ddgst": false, 00:20:14.977 "multipath": "multipath" 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_nvme_set_hotplug", 00:20:14.977 "params": { 00:20:14.977 "period_us": 100000, 00:20:14.977 "enable": false 00:20:14.977 } 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "method": "bdev_wait_for_examine" 00:20:14.977 } 00:20:14.977 ] 00:20:14.977 }, 00:20:14.977 { 00:20:14.977 "subsystem": "nbd", 00:20:14.977 "config": [] 
00:20:14.977 } 00:20:14.977 ] 00:20:14.977 }' 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2459149 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2459149 ']' 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2459149 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2459149 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2459149' 00:20:14.977 killing process with pid 2459149 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2459149 00:20:14.977 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.977 00:20:14.977 Latency(us) 00:20:14.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.977 =================================================================================================================== 00:20:14.977 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.977 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2459149 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2458791 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2458791 ']' 
00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2458791 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2458791 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2458791' 00:20:15.236 killing process with pid 2458791 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2458791 00:20:15.236 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2458791 00:20:15.496 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:15.496 "subsystems": [ 00:20:15.496 { 00:20:15.496 "subsystem": "keyring", 00:20:15.496 "config": [ 00:20:15.496 { 00:20:15.496 "method": "keyring_file_add_key", 00:20:15.496 "params": { 00:20:15.496 "name": "key0", 00:20:15.496 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:15.496 } 00:20:15.496 } 00:20:15.496 ] 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "subsystem": "iobuf", 00:20:15.496 "config": [ 00:20:15.496 { 00:20:15.496 "method": "iobuf_set_options", 00:20:15.496 "params": { 00:20:15.496 "small_pool_count": 8192, 00:20:15.496 "large_pool_count": 1024, 00:20:15.496 "small_bufsize": 8192, 00:20:15.496 "large_bufsize": 135168 00:20:15.496 } 00:20:15.496 } 00:20:15.496 ] 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "subsystem": "sock", 00:20:15.496 "config": [ 00:20:15.496 { 00:20:15.496 
"method": "sock_set_default_impl", 00:20:15.496 "params": { 00:20:15.496 "impl_name": "posix" 00:20:15.496 } 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "method": "sock_impl_set_options", 00:20:15.496 "params": { 00:20:15.496 "impl_name": "ssl", 00:20:15.496 "recv_buf_size": 4096, 00:20:15.496 "send_buf_size": 4096, 00:20:15.496 "enable_recv_pipe": true, 00:20:15.496 "enable_quickack": false, 00:20:15.496 "enable_placement_id": 0, 00:20:15.496 "enable_zerocopy_send_server": true, 00:20:15.496 "enable_zerocopy_send_client": false, 00:20:15.496 "zerocopy_threshold": 0, 00:20:15.496 "tls_version": 0, 00:20:15.496 "enable_ktls": false 00:20:15.496 } 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "method": "sock_impl_set_options", 00:20:15.496 "params": { 00:20:15.496 "impl_name": "posix", 00:20:15.496 "recv_buf_size": 2097152, 00:20:15.496 "send_buf_size": 2097152, 00:20:15.496 "enable_recv_pipe": true, 00:20:15.496 "enable_quickack": false, 00:20:15.496 "enable_placement_id": 0, 00:20:15.496 "enable_zerocopy_send_server": true, 00:20:15.496 "enable_zerocopy_send_client": false, 00:20:15.496 "zerocopy_threshold": 0, 00:20:15.496 "tls_version": 0, 00:20:15.496 "enable_ktls": false 00:20:15.496 } 00:20:15.496 } 00:20:15.496 ] 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "subsystem": "vmd", 00:20:15.496 "config": [] 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "subsystem": "accel", 00:20:15.496 "config": [ 00:20:15.496 { 00:20:15.496 "method": "accel_set_options", 00:20:15.496 "params": { 00:20:15.496 "small_cache_size": 128, 00:20:15.496 "large_cache_size": 16, 00:20:15.496 "task_count": 2048, 00:20:15.496 "sequence_count": 2048, 00:20:15.496 "buf_count": 2048 00:20:15.496 } 00:20:15.496 } 00:20:15.496 ] 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "subsystem": "bdev", 00:20:15.496 "config": [ 00:20:15.496 { 00:20:15.496 "method": "bdev_set_options", 00:20:15.496 "params": { 00:20:15.496 "bdev_io_pool_size": 65535, 00:20:15.496 "bdev_io_cache_size": 256, 00:20:15.496 
"bdev_auto_examine": true, 00:20:15.496 "iobuf_small_cache_size": 128, 00:20:15.496 "iobuf_large_cache_size": 16 00:20:15.496 } 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "method": "bdev_raid_set_options", 00:20:15.496 "params": { 00:20:15.496 "process_window_size_kb": 1024, 00:20:15.496 "process_max_bandwidth_mb_sec": 0 00:20:15.496 } 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "method": "bdev_iscsi_set_options", 00:20:15.496 "params": { 00:20:15.496 "timeout_sec": 30 00:20:15.496 } 00:20:15.496 }, 00:20:15.496 { 00:20:15.496 "method": "bdev_nvme_set_options", 00:20:15.496 "params": { 00:20:15.496 "action_on_timeout": "none", 00:20:15.496 "timeout_us": 0, 00:20:15.496 "timeout_admin_us": 0, 00:20:15.496 "keep_alive_timeout_ms": 10000, 00:20:15.496 "arbitration_burst": 0, 00:20:15.496 "low_priority_weight": 0, 00:20:15.496 "medium_priority_weight": 0, 00:20:15.496 "high_priority_weight": 0, 00:20:15.496 "nvme_adminq_poll_period_us": 10000, 00:20:15.496 "nvme_ioq_poll_period_us": 0, 00:20:15.496 "io_queue_requests": 0, 00:20:15.496 "delay_cmd_submit": true, 00:20:15.496 "transport_retry_count": 4, 00:20:15.496 "bdev_retry_count": 3, 00:20:15.496 "transport_ack_timeout": 0, 00:20:15.496 "ctrlr_loss_timeout_sec": 0, 00:20:15.496 "reconnect_delay_sec": 0, 00:20:15.496 "fast_io_fail_timeout_sec": 0, 00:20:15.496 "disable_auto_failback": false, 00:20:15.496 "generate_uuids": false, 00:20:15.496 "transport_tos": 0, 00:20:15.496 "nvme_error_stat": false, 00:20:15.496 "rdma_srq_size": 0, 00:20:15.496 "io_path_stat": false, 00:20:15.496 "allow_accel_sequence": false, 00:20:15.496 "rdma_max_cq_size": 0, 00:20:15.496 "rdma_cm_event_timeout_ms": 0, 00:20:15.496 "dhchap_digests": [ 00:20:15.496 "sha256", 00:20:15.496 "sha384", 00:20:15.496 "sha512" 00:20:15.496 ], 00:20:15.496 "dhchap_dhgroups": [ 00:20:15.496 "null", 00:20:15.496 "ffdhe2048", 00:20:15.497 "ffdhe3072", 00:20:15.497 "ffdhe4096", 00:20:15.497 "ffdhe6144", 00:20:15.497 "ffdhe8192" 00:20:15.497 ] 00:20:15.497 } 
00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "bdev_nvme_set_hotplug", 00:20:15.497 "params": { 00:20:15.497 "period_us": 100000, 00:20:15.497 "enable": false 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "bdev_malloc_create", 00:20:15.497 "params": { 00:20:15.497 "name": "malloc0", 00:20:15.497 "num_blocks": 8192, 00:20:15.497 "block_size": 4096, 00:20:15.497 "physical_block_size": 4096, 00:20:15.497 "uuid": "27f05f1c-5f5b-4a0e-8426-4f4a7325cb04", 00:20:15.497 "optimal_io_boundary": 0, 00:20:15.497 "md_size": 0, 00:20:15.497 "dif_type": 0, 00:20:15.497 "dif_is_head_of_md": false, 00:20:15.497 "dif_pi_format": 0 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "bdev_wait_for_examine" 00:20:15.497 } 00:20:15.497 ] 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "subsystem": "nbd", 00:20:15.497 "config": [] 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "subsystem": "scheduler", 00:20:15.497 "config": [ 00:20:15.497 { 00:20:15.497 "method": "framework_set_scheduler", 00:20:15.497 "params": { 00:20:15.497 "name": "static" 00:20:15.497 } 00:20:15.497 } 00:20:15.497 ] 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "subsystem": "nvmf", 00:20:15.497 "config": [ 00:20:15.497 { 00:20:15.497 "method": "nvmf_set_config", 00:20:15.497 "params": { 00:20:15.497 "discovery_filter": "match_any", 00:20:15.497 "admin_cmd_passthru": { 00:20:15.497 "identify_ctrlr": false 00:20:15.497 }, 00:20:15.497 "dhchap_digests": [ 00:20:15.497 "sha256", 00:20:15.497 "sha384", 00:20:15.497 "sha512" 00:20:15.497 ], 00:20:15.497 "dhchap_dhgroups": [ 00:20:15.497 "null", 00:20:15.497 "ffdhe2048", 00:20:15.497 "ffdhe3072", 00:20:15.497 "ffdhe4096", 00:20:15.497 "ffdhe6144", 00:20:15.497 "ffdhe8192" 00:20:15.497 ] 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_set_max_subsystems", 00:20:15.497 "params": { 00:20:15.497 "max_subsystems": 1024 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_set_crdt", 
00:20:15.497 "params": { 00:20:15.497 "crdt1": 0, 00:20:15.497 "crdt2": 0, 00:20:15.497 "crdt3": 0 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_create_transport", 00:20:15.497 "params": { 00:20:15.497 "trtype": "TCP", 00:20:15.497 "max_queue_depth": 128, 00:20:15.497 "max_io_qpairs_per_ctrlr": 127, 00:20:15.497 "in_capsule_data_size": 4096, 00:20:15.497 "max_io_size": 131072, 00:20:15.497 "io_unit_size": 131072, 00:20:15.497 "max_aq_depth": 128, 00:20:15.497 "num_shared_buffers": 511, 00:20:15.497 "buf_cache_size": 4294967295, 00:20:15.497 "dif_insert_or_strip": false, 00:20:15.497 "zcopy": false, 00:20:15.497 "c2h_success": false, 00:20:15.497 "sock_priority": 0, 00:20:15.497 "abort_timeout_sec": 1, 00:20:15.497 "ack_timeout": 0, 00:20:15.497 "data_wr_pool_size": 0 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_create_subsystem", 00:20:15.497 "params": { 00:20:15.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.497 "allow_any_host": false, 00:20:15.497 "serial_number": "SPDK00000000000001", 00:20:15.497 "model_number": "SPDK bdev Controller", 00:20:15.497 "max_namespaces": 10, 00:20:15.497 "min_cntlid": 1, 00:20:15.497 "max_cntlid": 65519, 00:20:15.497 "ana_reporting": false 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_subsystem_add_host", 00:20:15.497 "params": { 00:20:15.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.497 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.497 "psk": "key0" 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 "method": "nvmf_subsystem_add_ns", 00:20:15.497 "params": { 00:20:15.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.497 "namespace": { 00:20:15.497 "nsid": 1, 00:20:15.497 "bdev_name": "malloc0", 00:20:15.497 "nguid": "27F05F1C5F5B4A0E84264F4A7325CB04", 00:20:15.497 "uuid": "27f05f1c-5f5b-4a0e-8426-4f4a7325cb04", 00:20:15.497 "no_auto_visible": false 00:20:15.497 } 00:20:15.497 } 00:20:15.497 }, 00:20:15.497 { 00:20:15.497 
"method": "nvmf_subsystem_add_listener", 00:20:15.497 "params": { 00:20:15.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.497 "listen_address": { 00:20:15.497 "trtype": "TCP", 00:20:15.497 "adrfam": "IPv4", 00:20:15.497 "traddr": "10.0.0.2", 00:20:15.497 "trsvcid": "4420" 00:20:15.497 }, 00:20:15.497 "secure_channel": true 00:20:15.497 } 00:20:15.497 } 00:20:15.497 ] 00:20:15.497 } 00:20:15.497 ] 00:20:15.497 }' 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2459536 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2459536 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2459536 ']' 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.497 15:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.497 [2024-10-01 15:54:25.614732] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:15.497 [2024-10-01 15:54:25.614778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.774 [2024-10-01 15:54:25.686897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.774 [2024-10-01 15:54:25.764279] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.774 [2024-10-01 15:54:25.764313] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.774 [2024-10-01 15:54:25.764320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.774 [2024-10-01 15:54:25.764327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.774 [2024-10-01 15:54:25.764332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.774 [2024-10-01 15:54:25.764380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.032 [2024-10-01 15:54:25.988158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.032 [2024-10-01 15:54:26.020175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.032 [2024-10-01 15:54:26.020367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2459780 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2459780 /var/tmp/bdevperf.sock 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2459780 ']' 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.291 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:16.291 "subsystems": [ 00:20:16.291 { 00:20:16.291 "subsystem": "keyring", 00:20:16.291 "config": [ 00:20:16.291 { 00:20:16.291 "method": "keyring_file_add_key", 00:20:16.291 "params": { 00:20:16.291 "name": "key0", 00:20:16.291 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:16.291 } 00:20:16.291 } 00:20:16.291 ] 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "subsystem": "iobuf", 00:20:16.291 "config": [ 00:20:16.291 { 00:20:16.291 "method": "iobuf_set_options", 00:20:16.291 "params": { 00:20:16.291 "small_pool_count": 8192, 00:20:16.291 "large_pool_count": 1024, 00:20:16.291 "small_bufsize": 8192, 00:20:16.291 "large_bufsize": 135168 00:20:16.291 } 00:20:16.291 } 00:20:16.291 ] 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "subsystem": "sock", 00:20:16.291 "config": [ 00:20:16.291 { 00:20:16.291 "method": "sock_set_default_impl", 00:20:16.291 "params": { 00:20:16.291 "impl_name": "posix" 00:20:16.291 } 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "method": "sock_impl_set_options", 00:20:16.291 "params": { 00:20:16.291 "impl_name": "ssl", 00:20:16.291 "recv_buf_size": 4096, 00:20:16.291 "send_buf_size": 4096, 00:20:16.291 "enable_recv_pipe": true, 00:20:16.291 "enable_quickack": false, 00:20:16.291 "enable_placement_id": 0, 00:20:16.291 "enable_zerocopy_send_server": true, 00:20:16.291 "enable_zerocopy_send_client": false, 00:20:16.291 "zerocopy_threshold": 0, 00:20:16.291 "tls_version": 0, 00:20:16.291 "enable_ktls": false 00:20:16.291 } 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "method": "sock_impl_set_options", 00:20:16.291 "params": { 00:20:16.291 "impl_name": "posix", 
00:20:16.291 "recv_buf_size": 2097152, 00:20:16.291 "send_buf_size": 2097152, 00:20:16.291 "enable_recv_pipe": true, 00:20:16.291 "enable_quickack": false, 00:20:16.291 "enable_placement_id": 0, 00:20:16.291 "enable_zerocopy_send_server": true, 00:20:16.291 "enable_zerocopy_send_client": false, 00:20:16.291 "zerocopy_threshold": 0, 00:20:16.291 "tls_version": 0, 00:20:16.291 "enable_ktls": false 00:20:16.291 } 00:20:16.291 } 00:20:16.291 ] 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "subsystem": "vmd", 00:20:16.291 "config": [] 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "subsystem": "accel", 00:20:16.291 "config": [ 00:20:16.291 { 00:20:16.291 "method": "accel_set_options", 00:20:16.291 "params": { 00:20:16.291 "small_cache_size": 128, 00:20:16.291 "large_cache_size": 16, 00:20:16.291 "task_count": 2048, 00:20:16.291 "sequence_count": 2048, 00:20:16.291 "buf_count": 2048 00:20:16.291 } 00:20:16.291 } 00:20:16.291 ] 00:20:16.291 }, 00:20:16.291 { 00:20:16.291 "subsystem": "bdev", 00:20:16.291 "config": [ 00:20:16.291 { 00:20:16.291 "method": "bdev_set_options", 00:20:16.291 "params": { 00:20:16.291 "bdev_io_pool_size": 65535, 00:20:16.291 "bdev_io_cache_size": 256, 00:20:16.291 "bdev_auto_examine": true, 00:20:16.291 "iobuf_small_cache_size": 128, 00:20:16.291 "iobuf_large_cache_size": 16 00:20:16.291 } 00:20:16.291 }, 00:20:16.291 { 00:20:16.292 "method": "bdev_raid_set_options", 00:20:16.292 "params": { 00:20:16.292 "process_window_size_kb": 1024, 00:20:16.292 "process_max_bandwidth_mb_sec": 0 00:20:16.292 } 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "method": "bdev_iscsi_set_options", 00:20:16.292 "params": { 00:20:16.292 "timeout_sec": 30 00:20:16.292 } 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "method": "bdev_nvme_set_options", 00:20:16.292 "params": { 00:20:16.292 "action_on_timeout": "none", 00:20:16.292 "timeout_us": 0, 00:20:16.292 "timeout_admin_us": 0, 00:20:16.292 "keep_alive_timeout_ms": 10000, 00:20:16.292 "arbitration_burst": 0, 00:20:16.292 
"low_priority_weight": 0, 00:20:16.292 "medium_priority_weight": 0, 00:20:16.292 "high_priority_weight": 0, 00:20:16.292 "nvme_adminq_poll_period_us": 10000, 00:20:16.292 "nvme_ioq_poll_period_us": 0, 00:20:16.292 "io_queue_requests": 512, 00:20:16.292 "delay_cmd_submit": true, 00:20:16.292 "transport_retry_count": 4, 00:20:16.292 "bdev_retry_count": 3, 00:20:16.292 "transport_ack_timeout": 0, 00:20:16.292 "ctrlr_loss_timeout_sec": 0, 00:20:16.292 "reconnect_delay_sec": 0, 00:20:16.292 "fast_io_fail_timeout_sec": 0, 00:20:16.292 "disable_auto_failback": false, 00:20:16.292 "generate_uuids": false, 00:20:16.292 "transport_tos": 0, 00:20:16.292 "nvme_error_stat": false, 00:20:16.292 "rdma_srq_size": 0, 00:20:16.292 "io_path_stat": false, 00:20:16.292 "allow_accel_sequence": false, 00:20:16.292 "rdma_max_cq_size": 0, 00:20:16.292 "rdma_cm_event_timeout_ms": 0, 00:20:16.292 "dhchap_digests": [ 00:20:16.292 "sha256", 00:20:16.292 "sha384", 00:20:16.292 "sha512" 00:20:16.292 ], 00:20:16.292 "dhchap_dhgroups": [ 00:20:16.292 "null", 00:20:16.292 "ffdhe2048", 00:20:16.292 "ffdhe3072", 00:20:16.292 "ffdhe4096", 00:20:16.292 "ffdhe6144", 00:20:16.292 "ffdhe8192" 00:20:16.292 ] 00:20:16.292 } 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "method": "bdev_nvme_attach_controller", 00:20:16.292 "params": { 00:20:16.292 "name": "TLSTEST", 00:20:16.292 "trtype": "TCP", 00:20:16.292 "adrfam": "IPv4", 00:20:16.292 "traddr": "10.0.0.2", 00:20:16.292 "trsvcid": "4420", 00:20:16.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.292 "prchk_reftag": false, 00:20:16.292 "prchk_guard": false, 00:20:16.292 "ctrlr_loss_timeout_sec": 0, 00:20:16.292 "reconnect_delay_sec": 0, 00:20:16.292 "fast_io_fail_timeout_sec": 0, 00:20:16.292 "psk": "key0", 00:20:16.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.292 "hdgst": false, 00:20:16.292 "ddgst": false, 00:20:16.292 "multipath": "multipath" 00:20:16.292 } 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "method": "bdev_nvme_set_hotplug", 
00:20:16.292 "params": { 00:20:16.292 "period_us": 100000, 00:20:16.292 "enable": false 00:20:16.292 } 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "method": "bdev_wait_for_examine" 00:20:16.292 } 00:20:16.292 ] 00:20:16.292 }, 00:20:16.292 { 00:20:16.292 "subsystem": "nbd", 00:20:16.292 "config": [] 00:20:16.292 } 00:20:16.292 ] 00:20:16.292 }' 00:20:16.550 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.550 15:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.550 [2024-10-01 15:54:26.522459] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:16.550 [2024-10-01 15:54:26.522504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459780 ] 00:20:16.550 [2024-10-01 15:54:26.590348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.550 [2024-10-01 15:54:26.662276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.808 [2024-10-01 15:54:26.814026] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.374 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.374 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.374 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:17.374 Running I/O for 10 seconds... 
00:20:27.609 5429.00 IOPS, 21.21 MiB/s 5510.00 IOPS, 21.52 MiB/s 5509.00 IOPS, 21.52 MiB/s 5527.25 IOPS, 21.59 MiB/s 5550.20 IOPS, 21.68 MiB/s 5573.33 IOPS, 21.77 MiB/s 5567.43 IOPS, 21.75 MiB/s 5560.62 IOPS, 21.72 MiB/s 5572.78 IOPS, 21.77 MiB/s 5567.70 IOPS, 21.75 MiB/s 00:20:27.609 Latency(us) 00:20:27.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.609 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:27.609 Verification LBA range: start 0x0 length 0x2000 00:20:27.609 TLSTESTn1 : 10.02 5570.08 21.76 0.00 0.00 22942.67 6303.94 24092.28 00:20:27.609 =================================================================================================================== 00:20:27.609 Total : 5570.08 21.76 0.00 0.00 22942.67 6303.94 24092.28 00:20:27.609 { 00:20:27.609 "results": [ 00:20:27.609 { 00:20:27.609 "job": "TLSTESTn1", 00:20:27.609 "core_mask": "0x4", 00:20:27.609 "workload": "verify", 00:20:27.609 "status": "finished", 00:20:27.610 "verify_range": { 00:20:27.610 "start": 0, 00:20:27.610 "length": 8192 00:20:27.610 }, 00:20:27.610 "queue_depth": 128, 00:20:27.610 "io_size": 4096, 00:20:27.610 "runtime": 10.018529, 00:20:27.610 "iops": 5570.079200249857, 00:20:27.610 "mibps": 21.758121875976006, 00:20:27.610 "io_failed": 0, 00:20:27.610 "io_timeout": 0, 00:20:27.610 "avg_latency_us": 22942.6725523004, 00:20:27.610 "min_latency_us": 6303.939047619047, 00:20:27.610 "max_latency_us": 24092.281904761905 00:20:27.610 } 00:20:27.610 ], 00:20:27.610 "core_count": 1 00:20:27.610 } 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2459780 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2459780 ']' 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 2459780 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2459780 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2459780' 00:20:27.610 killing process with pid 2459780 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2459780 00:20:27.610 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.610 00:20:27.610 Latency(us) 00:20:27.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.610 =================================================================================================================== 00:20:27.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2459780 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2459536 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2459536 ']' 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2459536 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.610 15:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2459536 00:20:27.610 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2459536' 00:20:27.869 killing process with pid 2459536 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2459536 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2459536 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2461620 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:27.869 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2461620 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2461620 ']' 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.869 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.869 [2024-10-01 15:54:38.053386] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:27.870 [2024-10-01 15:54:38.053434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.128 [2024-10-01 15:54:38.123246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.128 [2024-10-01 15:54:38.198826] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.128 [2024-10-01 15:54:38.198870] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.128 [2024-10-01 15:54:38.198878] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.128 [2024-10-01 15:54:38.198885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.128 [2024-10-01 15:54:38.198890] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.128 [2024-10-01 15:54:38.198909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ktOg8wkXvb 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ktOg8wkXvb 00:20:29.065 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.065 [2024-10-01 15:54:39.104519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.065 15:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:29.323 15:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:29.323 [2024-10-01 15:54:39.461444] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.323 [2024-10-01 15:54:39.461661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:29.323 15:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.644 malloc0 00:20:29.644 15:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.903 15:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:29.903 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2461991 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2461991 /var/tmp/bdevperf.sock 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2461991 ']' 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:30.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.162 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.162 [2024-10-01 15:54:40.268887] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:30.162 [2024-10-01 15:54:40.268937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461991 ] 00:20:30.162 [2024-10-01 15:54:40.334385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.421 [2024-10-01 15:54:40.413550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.988 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.988 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:30.989 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:31.248 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:31.509 [2024-10-01 15:54:41.461859] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.509 nvme0n1 00:20:31.509 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.509 Running I/O for 1 seconds... 00:20:32.885 5285.00 IOPS, 20.64 MiB/s 00:20:32.885 Latency(us) 00:20:32.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.885 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.885 Verification LBA range: start 0x0 length 0x2000 00:20:32.885 nvme0n1 : 1.01 5344.08 20.88 0.00 0.00 23795.68 4899.60 22968.81 00:20:32.885 =================================================================================================================== 00:20:32.885 Total : 5344.08 20.88 0.00 0.00 23795.68 4899.60 22968.81 00:20:32.885 { 00:20:32.885 "results": [ 00:20:32.885 { 00:20:32.885 "job": "nvme0n1", 00:20:32.885 "core_mask": "0x2", 00:20:32.885 "workload": "verify", 00:20:32.885 "status": "finished", 00:20:32.885 "verify_range": { 00:20:32.885 "start": 0, 00:20:32.885 "length": 8192 00:20:32.885 }, 00:20:32.885 "queue_depth": 128, 00:20:32.885 "io_size": 4096, 00:20:32.885 "runtime": 1.012897, 00:20:32.885 "iops": 5344.077433342186, 00:20:32.885 "mibps": 20.875302473992914, 00:20:32.885 "io_failed": 0, 00:20:32.885 "io_timeout": 0, 00:20:32.885 "avg_latency_us": 23795.676921696446, 00:20:32.885 "min_latency_us": 4899.596190476191, 00:20:32.885 "max_latency_us": 22968.80761904762 00:20:32.885 } 00:20:32.885 ], 00:20:32.885 "core_count": 1 00:20:32.885 } 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2461991 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2461991 ']' 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2461991 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2461991 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2461991' 00:20:32.885 killing process with pid 2461991 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2461991 00:20:32.885 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.885 00:20:32.885 Latency(us) 00:20:32.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.885 =================================================================================================================== 00:20:32.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2461991 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2461620 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2461620 ']' 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2461620 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.885 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2461620 00:20:32.886 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:20:32.886 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.886 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2461620' 00:20:32.886 killing process with pid 2461620 00:20:32.886 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2461620 00:20:32.886 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2461620 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2462539 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2462539 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2462539 ']' 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:33.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.145 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.145 [2024-10-01 15:54:43.239975] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:33.145 [2024-10-01 15:54:43.240024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.145 [2024-10-01 15:54:43.311484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.403 [2024-10-01 15:54:43.379492] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.403 [2024-10-01 15:54:43.379530] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.403 [2024-10-01 15:54:43.379537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.403 [2024-10-01 15:54:43.379543] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.403 [2024-10-01 15:54:43.379548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:33.403 [2024-10-01 15:54:43.379565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.971 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.971 [2024-10-01 15:54:44.112067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.971 malloc0 00:20:33.971 [2024-10-01 15:54:44.149256] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.971 [2024-10-01 15:54:44.149470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2462609 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2462609 /var/tmp/bdevperf.sock 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2462609 ']' 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.230 15:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.230 [2024-10-01 15:54:44.225376] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:20:34.230 [2024-10-01 15:54:44.225419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462609 ] 00:20:34.230 [2024-10-01 15:54:44.291830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.230 [2024-10-01 15:54:44.375487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.166 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.166 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:35.166 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ktOg8wkXvb 00:20:35.166 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:35.425 [2024-10-01 15:54:45.417408] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.425 nvme0n1 00:20:35.425 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.425 Running I/O for 1 seconds... 
00:20:36.800 5296.00 IOPS, 20.69 MiB/s 00:20:36.800 Latency(us) 00:20:36.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.801 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:36.801 Verification LBA range: start 0x0 length 0x2000 00:20:36.801 nvme0n1 : 1.01 5347.46 20.89 0.00 0.00 23782.02 5118.05 26464.06 00:20:36.801 =================================================================================================================== 00:20:36.801 Total : 5347.46 20.89 0.00 0.00 23782.02 5118.05 26464.06 00:20:36.801 { 00:20:36.801 "results": [ 00:20:36.801 { 00:20:36.801 "job": "nvme0n1", 00:20:36.801 "core_mask": "0x2", 00:20:36.801 "workload": "verify", 00:20:36.801 "status": "finished", 00:20:36.801 "verify_range": { 00:20:36.801 "start": 0, 00:20:36.801 "length": 8192 00:20:36.801 }, 00:20:36.801 "queue_depth": 128, 00:20:36.801 "io_size": 4096, 00:20:36.801 "runtime": 1.014313, 00:20:36.801 "iops": 5347.46177954931, 00:20:36.801 "mibps": 20.888522576364494, 00:20:36.801 "io_failed": 0, 00:20:36.801 "io_timeout": 0, 00:20:36.801 "avg_latency_us": 23782.020980474783, 00:20:36.801 "min_latency_us": 5118.049523809524, 00:20:36.801 "max_latency_us": 26464.06095238095 00:20:36.801 } 00:20:36.801 ], 00:20:36.801 "core_count": 1 00:20:36.801 } 00:20:36.801 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:36.801 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.801 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.801 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:36.801 "subsystems": [ 00:20:36.801 { 00:20:36.801 "subsystem": "keyring", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": 
"keyring_file_add_key", 00:20:36.801 "params": { 00:20:36.801 "name": "key0", 00:20:36.801 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:36.801 } 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "iobuf", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "iobuf_set_options", 00:20:36.801 "params": { 00:20:36.801 "small_pool_count": 8192, 00:20:36.801 "large_pool_count": 1024, 00:20:36.801 "small_bufsize": 8192, 00:20:36.801 "large_bufsize": 135168 00:20:36.801 } 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "sock", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "sock_set_default_impl", 00:20:36.801 "params": { 00:20:36.801 "impl_name": "posix" 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "sock_impl_set_options", 00:20:36.801 "params": { 00:20:36.801 "impl_name": "ssl", 00:20:36.801 "recv_buf_size": 4096, 00:20:36.801 "send_buf_size": 4096, 00:20:36.801 "enable_recv_pipe": true, 00:20:36.801 "enable_quickack": false, 00:20:36.801 "enable_placement_id": 0, 00:20:36.801 "enable_zerocopy_send_server": true, 00:20:36.801 "enable_zerocopy_send_client": false, 00:20:36.801 "zerocopy_threshold": 0, 00:20:36.801 "tls_version": 0, 00:20:36.801 "enable_ktls": false 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "sock_impl_set_options", 00:20:36.801 "params": { 00:20:36.801 "impl_name": "posix", 00:20:36.801 "recv_buf_size": 2097152, 00:20:36.801 "send_buf_size": 2097152, 00:20:36.801 "enable_recv_pipe": true, 00:20:36.801 "enable_quickack": false, 00:20:36.801 "enable_placement_id": 0, 00:20:36.801 "enable_zerocopy_send_server": true, 00:20:36.801 "enable_zerocopy_send_client": false, 00:20:36.801 "zerocopy_threshold": 0, 00:20:36.801 "tls_version": 0, 00:20:36.801 "enable_ktls": false 00:20:36.801 } 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "vmd", 00:20:36.801 "config": [] 00:20:36.801 }, 
00:20:36.801 { 00:20:36.801 "subsystem": "accel", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "accel_set_options", 00:20:36.801 "params": { 00:20:36.801 "small_cache_size": 128, 00:20:36.801 "large_cache_size": 16, 00:20:36.801 "task_count": 2048, 00:20:36.801 "sequence_count": 2048, 00:20:36.801 "buf_count": 2048 00:20:36.801 } 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "bdev", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "bdev_set_options", 00:20:36.801 "params": { 00:20:36.801 "bdev_io_pool_size": 65535, 00:20:36.801 "bdev_io_cache_size": 256, 00:20:36.801 "bdev_auto_examine": true, 00:20:36.801 "iobuf_small_cache_size": 128, 00:20:36.801 "iobuf_large_cache_size": 16 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_raid_set_options", 00:20:36.801 "params": { 00:20:36.801 "process_window_size_kb": 1024, 00:20:36.801 "process_max_bandwidth_mb_sec": 0 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_iscsi_set_options", 00:20:36.801 "params": { 00:20:36.801 "timeout_sec": 30 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_nvme_set_options", 00:20:36.801 "params": { 00:20:36.801 "action_on_timeout": "none", 00:20:36.801 "timeout_us": 0, 00:20:36.801 "timeout_admin_us": 0, 00:20:36.801 "keep_alive_timeout_ms": 10000, 00:20:36.801 "arbitration_burst": 0, 00:20:36.801 "low_priority_weight": 0, 00:20:36.801 "medium_priority_weight": 0, 00:20:36.801 "high_priority_weight": 0, 00:20:36.801 "nvme_adminq_poll_period_us": 10000, 00:20:36.801 "nvme_ioq_poll_period_us": 0, 00:20:36.801 "io_queue_requests": 0, 00:20:36.801 "delay_cmd_submit": true, 00:20:36.801 "transport_retry_count": 4, 00:20:36.801 "bdev_retry_count": 3, 00:20:36.801 "transport_ack_timeout": 0, 00:20:36.801 "ctrlr_loss_timeout_sec": 0, 00:20:36.801 "reconnect_delay_sec": 0, 00:20:36.801 "fast_io_fail_timeout_sec": 0, 00:20:36.801 
"disable_auto_failback": false, 00:20:36.801 "generate_uuids": false, 00:20:36.801 "transport_tos": 0, 00:20:36.801 "nvme_error_stat": false, 00:20:36.801 "rdma_srq_size": 0, 00:20:36.801 "io_path_stat": false, 00:20:36.801 "allow_accel_sequence": false, 00:20:36.801 "rdma_max_cq_size": 0, 00:20:36.801 "rdma_cm_event_timeout_ms": 0, 00:20:36.801 "dhchap_digests": [ 00:20:36.801 "sha256", 00:20:36.801 "sha384", 00:20:36.801 "sha512" 00:20:36.801 ], 00:20:36.801 "dhchap_dhgroups": [ 00:20:36.801 "null", 00:20:36.801 "ffdhe2048", 00:20:36.801 "ffdhe3072", 00:20:36.801 "ffdhe4096", 00:20:36.801 "ffdhe6144", 00:20:36.801 "ffdhe8192" 00:20:36.801 ] 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_nvme_set_hotplug", 00:20:36.801 "params": { 00:20:36.801 "period_us": 100000, 00:20:36.801 "enable": false 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_malloc_create", 00:20:36.801 "params": { 00:20:36.801 "name": "malloc0", 00:20:36.801 "num_blocks": 8192, 00:20:36.801 "block_size": 4096, 00:20:36.801 "physical_block_size": 4096, 00:20:36.801 "uuid": "bef3d17b-648a-41a7-bb4f-571cef4cf074", 00:20:36.801 "optimal_io_boundary": 0, 00:20:36.801 "md_size": 0, 00:20:36.801 "dif_type": 0, 00:20:36.801 "dif_is_head_of_md": false, 00:20:36.801 "dif_pi_format": 0 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "bdev_wait_for_examine" 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "nbd", 00:20:36.801 "config": [] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "scheduler", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "framework_set_scheduler", 00:20:36.801 "params": { 00:20:36.801 "name": "static" 00:20:36.801 } 00:20:36.801 } 00:20:36.801 ] 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "subsystem": "nvmf", 00:20:36.801 "config": [ 00:20:36.801 { 00:20:36.801 "method": "nvmf_set_config", 00:20:36.801 "params": { 00:20:36.801 "discovery_filter": 
"match_any", 00:20:36.801 "admin_cmd_passthru": { 00:20:36.801 "identify_ctrlr": false 00:20:36.801 }, 00:20:36.801 "dhchap_digests": [ 00:20:36.801 "sha256", 00:20:36.801 "sha384", 00:20:36.801 "sha512" 00:20:36.801 ], 00:20:36.801 "dhchap_dhgroups": [ 00:20:36.801 "null", 00:20:36.801 "ffdhe2048", 00:20:36.801 "ffdhe3072", 00:20:36.801 "ffdhe4096", 00:20:36.801 "ffdhe6144", 00:20:36.801 "ffdhe8192" 00:20:36.801 ] 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "nvmf_set_max_subsystems", 00:20:36.801 "params": { 00:20:36.801 "max_subsystems": 1024 00:20:36.801 } 00:20:36.801 }, 00:20:36.801 { 00:20:36.801 "method": "nvmf_set_crdt", 00:20:36.801 "params": { 00:20:36.801 "crdt1": 0, 00:20:36.801 "crdt2": 0, 00:20:36.801 "crdt3": 0 00:20:36.801 } 00:20:36.801 }, 00:20:36.802 { 00:20:36.802 "method": "nvmf_create_transport", 00:20:36.802 "params": { 00:20:36.802 "trtype": "TCP", 00:20:36.802 "max_queue_depth": 128, 00:20:36.802 "max_io_qpairs_per_ctrlr": 127, 00:20:36.802 "in_capsule_data_size": 4096, 00:20:36.802 "max_io_size": 131072, 00:20:36.802 "io_unit_size": 131072, 00:20:36.802 "max_aq_depth": 128, 00:20:36.802 "num_shared_buffers": 511, 00:20:36.802 "buf_cache_size": 4294967295, 00:20:36.802 "dif_insert_or_strip": false, 00:20:36.802 "zcopy": false, 00:20:36.802 "c2h_success": false, 00:20:36.802 "sock_priority": 0, 00:20:36.802 "abort_timeout_sec": 1, 00:20:36.802 "ack_timeout": 0, 00:20:36.802 "data_wr_pool_size": 0 00:20:36.802 } 00:20:36.802 }, 00:20:36.802 { 00:20:36.802 "method": "nvmf_create_subsystem", 00:20:36.802 "params": { 00:20:36.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.802 "allow_any_host": false, 00:20:36.802 "serial_number": "00000000000000000000", 00:20:36.802 "model_number": "SPDK bdev Controller", 00:20:36.802 "max_namespaces": 32, 00:20:36.802 "min_cntlid": 1, 00:20:36.802 "max_cntlid": 65519, 00:20:36.802 "ana_reporting": false 00:20:36.802 } 00:20:36.802 }, 00:20:36.802 { 00:20:36.802 "method": 
"nvmf_subsystem_add_host", 00:20:36.802 "params": { 00:20:36.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.802 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.802 "psk": "key0" 00:20:36.802 } 00:20:36.802 }, 00:20:36.802 { 00:20:36.802 "method": "nvmf_subsystem_add_ns", 00:20:36.802 "params": { 00:20:36.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.802 "namespace": { 00:20:36.802 "nsid": 1, 00:20:36.802 "bdev_name": "malloc0", 00:20:36.802 "nguid": "BEF3D17B648A41A7BB4F571CEF4CF074", 00:20:36.802 "uuid": "bef3d17b-648a-41a7-bb4f-571cef4cf074", 00:20:36.802 "no_auto_visible": false 00:20:36.802 } 00:20:36.802 } 00:20:36.802 }, 00:20:36.802 { 00:20:36.802 "method": "nvmf_subsystem_add_listener", 00:20:36.802 "params": { 00:20:36.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.802 "listen_address": { 00:20:36.802 "trtype": "TCP", 00:20:36.802 "adrfam": "IPv4", 00:20:36.802 "traddr": "10.0.0.2", 00:20:36.802 "trsvcid": "4420" 00:20:36.802 }, 00:20:36.802 "secure_channel": false, 00:20:36.802 "sock_impl": "ssl" 00:20:36.802 } 00:20:36.802 } 00:20:36.802 ] 00:20:36.802 } 00:20:36.802 ] 00:20:36.802 }' 00:20:36.802 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:37.060 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:37.060 "subsystems": [ 00:20:37.060 { 00:20:37.060 "subsystem": "keyring", 00:20:37.060 "config": [ 00:20:37.060 { 00:20:37.060 "method": "keyring_file_add_key", 00:20:37.060 "params": { 00:20:37.060 "name": "key0", 00:20:37.060 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:37.060 } 00:20:37.060 } 00:20:37.060 ] 00:20:37.060 }, 00:20:37.060 { 00:20:37.060 "subsystem": "iobuf", 00:20:37.060 "config": [ 00:20:37.060 { 00:20:37.060 "method": "iobuf_set_options", 00:20:37.060 "params": { 00:20:37.060 "small_pool_count": 8192, 00:20:37.060 "large_pool_count": 1024, 00:20:37.060 "small_bufsize": 
8192, 00:20:37.060 "large_bufsize": 135168 00:20:37.060 } 00:20:37.060 } 00:20:37.060 ] 00:20:37.060 }, 00:20:37.060 { 00:20:37.060 "subsystem": "sock", 00:20:37.060 "config": [ 00:20:37.060 { 00:20:37.060 "method": "sock_set_default_impl", 00:20:37.060 "params": { 00:20:37.060 "impl_name": "posix" 00:20:37.060 } 00:20:37.060 }, 00:20:37.061 { 00:20:37.061 "method": "sock_impl_set_options", 00:20:37.061 "params": { 00:20:37.061 "impl_name": "ssl", 00:20:37.061 "recv_buf_size": 4096, 00:20:37.061 "send_buf_size": 4096, 00:20:37.061 "enable_recv_pipe": true, 00:20:37.061 "enable_quickack": false, 00:20:37.061 "enable_placement_id": 0, 00:20:37.061 "enable_zerocopy_send_server": true, 00:20:37.061 "enable_zerocopy_send_client": false, 00:20:37.061 "zerocopy_threshold": 0, 00:20:37.061 "tls_version": 0, 00:20:37.061 "enable_ktls": false 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "sock_impl_set_options", 00:20:37.061 "params": { 00:20:37.061 "impl_name": "posix", 00:20:37.061 "recv_buf_size": 2097152, 00:20:37.061 "send_buf_size": 2097152, 00:20:37.061 "enable_recv_pipe": true, 00:20:37.061 "enable_quickack": false, 00:20:37.061 "enable_placement_id": 0, 00:20:37.061 "enable_zerocopy_send_server": true, 00:20:37.061 "enable_zerocopy_send_client": false, 00:20:37.061 "zerocopy_threshold": 0, 00:20:37.061 "tls_version": 0, 00:20:37.061 "enable_ktls": false 00:20:37.061 } 00:20:37.061 } 00:20:37.061 ] 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "subsystem": "vmd", 00:20:37.061 "config": [] 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "subsystem": "accel", 00:20:37.061 "config": [ 00:20:37.061 { 00:20:37.061 "method": "accel_set_options", 00:20:37.061 "params": { 00:20:37.061 "small_cache_size": 128, 00:20:37.061 "large_cache_size": 16, 00:20:37.061 "task_count": 2048, 00:20:37.061 "sequence_count": 2048, 00:20:37.061 "buf_count": 2048 00:20:37.061 } 00:20:37.061 } 00:20:37.061 ] 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "subsystem": "bdev", 
00:20:37.061 "config": [ 00:20:37.061 { 00:20:37.061 "method": "bdev_set_options", 00:20:37.061 "params": { 00:20:37.061 "bdev_io_pool_size": 65535, 00:20:37.061 "bdev_io_cache_size": 256, 00:20:37.061 "bdev_auto_examine": true, 00:20:37.061 "iobuf_small_cache_size": 128, 00:20:37.061 "iobuf_large_cache_size": 16 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_raid_set_options", 00:20:37.061 "params": { 00:20:37.061 "process_window_size_kb": 1024, 00:20:37.061 "process_max_bandwidth_mb_sec": 0 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_iscsi_set_options", 00:20:37.061 "params": { 00:20:37.061 "timeout_sec": 30 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_nvme_set_options", 00:20:37.061 "params": { 00:20:37.061 "action_on_timeout": "none", 00:20:37.061 "timeout_us": 0, 00:20:37.061 "timeout_admin_us": 0, 00:20:37.061 "keep_alive_timeout_ms": 10000, 00:20:37.061 "arbitration_burst": 0, 00:20:37.061 "low_priority_weight": 0, 00:20:37.061 "medium_priority_weight": 0, 00:20:37.061 "high_priority_weight": 0, 00:20:37.061 "nvme_adminq_poll_period_us": 10000, 00:20:37.061 "nvme_ioq_poll_period_us": 0, 00:20:37.061 "io_queue_requests": 512, 00:20:37.061 "delay_cmd_submit": true, 00:20:37.061 "transport_retry_count": 4, 00:20:37.061 "bdev_retry_count": 3, 00:20:37.061 "transport_ack_timeout": 0, 00:20:37.061 "ctrlr_loss_timeout_sec": 0, 00:20:37.061 "reconnect_delay_sec": 0, 00:20:37.061 "fast_io_fail_timeout_sec": 0, 00:20:37.061 "disable_auto_failback": false, 00:20:37.061 "generate_uuids": false, 00:20:37.061 "transport_tos": 0, 00:20:37.061 "nvme_error_stat": false, 00:20:37.061 "rdma_srq_size": 0, 00:20:37.061 "io_path_stat": false, 00:20:37.061 "allow_accel_sequence": false, 00:20:37.061 "rdma_max_cq_size": 0, 00:20:37.061 "rdma_cm_event_timeout_ms": 0, 00:20:37.061 "dhchap_digests": [ 00:20:37.061 "sha256", 00:20:37.061 "sha384", 00:20:37.061 "sha512" 00:20:37.061 ], 00:20:37.061 
"dhchap_dhgroups": [ 00:20:37.061 "null", 00:20:37.061 "ffdhe2048", 00:20:37.061 "ffdhe3072", 00:20:37.061 "ffdhe4096", 00:20:37.061 "ffdhe6144", 00:20:37.061 "ffdhe8192" 00:20:37.061 ] 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_nvme_attach_controller", 00:20:37.061 "params": { 00:20:37.061 "name": "nvme0", 00:20:37.061 "trtype": "TCP", 00:20:37.061 "adrfam": "IPv4", 00:20:37.061 "traddr": "10.0.0.2", 00:20:37.061 "trsvcid": "4420", 00:20:37.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.061 "prchk_reftag": false, 00:20:37.061 "prchk_guard": false, 00:20:37.061 "ctrlr_loss_timeout_sec": 0, 00:20:37.061 "reconnect_delay_sec": 0, 00:20:37.061 "fast_io_fail_timeout_sec": 0, 00:20:37.061 "psk": "key0", 00:20:37.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.061 "hdgst": false, 00:20:37.061 "ddgst": false, 00:20:37.061 "multipath": "multipath" 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_nvme_set_hotplug", 00:20:37.061 "params": { 00:20:37.061 "period_us": 100000, 00:20:37.061 "enable": false 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_enable_histogram", 00:20:37.061 "params": { 00:20:37.061 "name": "nvme0n1", 00:20:37.061 "enable": true 00:20:37.061 } 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "method": "bdev_wait_for_examine" 00:20:37.061 } 00:20:37.061 ] 00:20:37.061 }, 00:20:37.061 { 00:20:37.061 "subsystem": "nbd", 00:20:37.061 "config": [] 00:20:37.061 } 00:20:37.061 ] 00:20:37.061 }' 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2462609 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2462609 ']' 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2462609 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:37.061 15:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2462609 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2462609' 00:20:37.061 killing process with pid 2462609 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2462609 00:20:37.061 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.061 00:20:37.061 Latency(us) 00:20:37.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.061 =================================================================================================================== 00:20:37.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2462609 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2462539 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2462539 ']' 00:20:37.061 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2462539 00:20:37.062 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:37.320 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.320 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2462539 00:20:37.320 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2462539' 00:20:37.321 killing process with pid 2462539 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2462539 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2462539 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:37.321 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:37.321 "subsystems": [ 00:20:37.321 { 00:20:37.321 "subsystem": "keyring", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "keyring_file_add_key", 00:20:37.321 "params": { 00:20:37.321 "name": "key0", 00:20:37.321 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:37.321 } 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "iobuf", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "iobuf_set_options", 00:20:37.321 "params": { 00:20:37.321 "small_pool_count": 8192, 00:20:37.321 "large_pool_count": 1024, 00:20:37.321 "small_bufsize": 8192, 00:20:37.321 "large_bufsize": 135168 00:20:37.321 } 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "sock", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "sock_set_default_impl", 00:20:37.321 "params": { 00:20:37.321 "impl_name": "posix" 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "sock_impl_set_options", 00:20:37.321 "params": { 00:20:37.321 "impl_name": "ssl", 00:20:37.321 "recv_buf_size": 4096, 00:20:37.321 
"send_buf_size": 4096, 00:20:37.321 "enable_recv_pipe": true, 00:20:37.321 "enable_quickack": false, 00:20:37.321 "enable_placement_id": 0, 00:20:37.321 "enable_zerocopy_send_server": true, 00:20:37.321 "enable_zerocopy_send_client": false, 00:20:37.321 "zerocopy_threshold": 0, 00:20:37.321 "tls_version": 0, 00:20:37.321 "enable_ktls": false 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "sock_impl_set_options", 00:20:37.321 "params": { 00:20:37.321 "impl_name": "posix", 00:20:37.321 "recv_buf_size": 2097152, 00:20:37.321 "send_buf_size": 2097152, 00:20:37.321 "enable_recv_pipe": true, 00:20:37.321 "enable_quickack": false, 00:20:37.321 "enable_placement_id": 0, 00:20:37.321 "enable_zerocopy_send_server": true, 00:20:37.321 "enable_zerocopy_send_client": false, 00:20:37.321 "zerocopy_threshold": 0, 00:20:37.321 "tls_version": 0, 00:20:37.321 "enable_ktls": false 00:20:37.321 } 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "vmd", 00:20:37.321 "config": [] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "accel", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "accel_set_options", 00:20:37.321 "params": { 00:20:37.321 "small_cache_size": 128, 00:20:37.321 "large_cache_size": 16, 00:20:37.321 "task_count": 2048, 00:20:37.321 "sequence_count": 2048, 00:20:37.321 "buf_count": 2048 00:20:37.321 } 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "bdev", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "bdev_set_options", 00:20:37.321 "params": { 00:20:37.321 "bdev_io_pool_size": 65535, 00:20:37.321 "bdev_io_cache_size": 256, 00:20:37.321 "bdev_auto_examine": true, 00:20:37.321 "iobuf_small_cache_size": 128, 00:20:37.321 "iobuf_large_cache_size": 16 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_raid_set_options", 00:20:37.321 "params": { 00:20:37.321 "process_window_size_kb": 1024, 00:20:37.321 
"process_max_bandwidth_mb_sec": 0 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_iscsi_set_options", 00:20:37.321 "params": { 00:20:37.321 "timeout_sec": 30 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_nvme_set_options", 00:20:37.321 "params": { 00:20:37.321 "action_on_timeout": "none", 00:20:37.321 "timeout_us": 0, 00:20:37.321 "timeout_admin_us": 0, 00:20:37.321 "keep_alive_timeout_ms": 10000, 00:20:37.321 "arbitration_burst": 0, 00:20:37.321 "low_priority_weight": 0, 00:20:37.321 "medium_priority_weight": 0, 00:20:37.321 "high_priority_weight": 0, 00:20:37.321 "nvme_adminq_poll_period_us": 10000, 00:20:37.321 "nvme_ioq_poll_period_us": 0, 00:20:37.321 "io_queue_requests": 0, 00:20:37.321 "delay_cmd_submit": true, 00:20:37.321 "transport_retry_count": 4, 00:20:37.321 "bdev_retry_count": 3, 00:20:37.321 "transport_ack_timeout": 0, 00:20:37.321 "ctrlr_loss_timeout_sec": 0, 00:20:37.321 "reconnect_delay_sec": 0, 00:20:37.321 "fast_io_fail_timeout_sec": 0, 00:20:37.321 "disable_auto_failback": false, 00:20:37.321 "generate_uuids": false, 00:20:37.321 "transport_tos": 0, 00:20:37.321 "nvme_error_stat": false, 00:20:37.321 "rdma_srq_size": 0, 00:20:37.321 "io_path_stat": false, 00:20:37.321 "allow_accel_sequence": false, 00:20:37.321 "rdma_max_cq_size": 0, 00:20:37.321 "rdma_cm_event_timeout_ms": 0, 00:20:37.321 "dhchap_digests": [ 00:20:37.321 "sha256", 00:20:37.321 "sha384", 00:20:37.321 "sha512" 00:20:37.321 ], 00:20:37.321 "dhchap_dhgroups": [ 00:20:37.321 "null", 00:20:37.321 "ffdhe2048", 00:20:37.321 "ffdhe3072", 00:20:37.321 "ffdhe4096", 00:20:37.321 "ffdhe6144", 00:20:37.321 "ffdhe8192" 00:20:37.321 ] 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_nvme_set_hotplug", 00:20:37.321 "params": { 00:20:37.321 "period_us": 100000, 00:20:37.321 "enable": false 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_malloc_create", 00:20:37.321 "params": { 
00:20:37.321 "name": "malloc0", 00:20:37.321 "num_blocks": 8192, 00:20:37.321 "block_size": 4096, 00:20:37.321 "physical_block_size": 4096, 00:20:37.321 "uuid": "bef3d17b-648a-41a7-bb4f-571cef4cf074", 00:20:37.321 "optimal_io_boundary": 0, 00:20:37.321 "md_size": 0, 00:20:37.321 "dif_type": 0, 00:20:37.321 "dif_is_head_of_md": false, 00:20:37.321 "dif_pi_format": 0 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "bdev_wait_for_examine" 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "nbd", 00:20:37.321 "config": [] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "scheduler", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "framework_set_scheduler", 00:20:37.321 "params": { 00:20:37.321 "name": "static" 00:20:37.321 } 00:20:37.321 } 00:20:37.321 ] 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "subsystem": "nvmf", 00:20:37.321 "config": [ 00:20:37.321 { 00:20:37.321 "method": "nvmf_set_config", 00:20:37.321 "params": { 00:20:37.321 "discovery_filter": "match_any", 00:20:37.321 "admin_cmd_passthru": { 00:20:37.321 "identify_ctrlr": false 00:20:37.321 }, 00:20:37.321 "dhchap_digests": [ 00:20:37.321 "sha256", 00:20:37.321 "sha384", 00:20:37.321 "sha512" 00:20:37.321 ], 00:20:37.321 "dhchap_dhgroups": [ 00:20:37.321 "null", 00:20:37.321 "ffdhe2048", 00:20:37.321 "ffdhe3072", 00:20:37.321 "ffdhe4096", 00:20:37.321 "ffdhe6144", 00:20:37.321 "ffdhe8192" 00:20:37.321 ] 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "nvmf_set_max_subsystems", 00:20:37.321 "params": { 00:20:37.321 "max_subsystems": 1024 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "nvmf_set_crdt", 00:20:37.321 "params": { 00:20:37.321 "crdt1": 0, 00:20:37.321 "crdt2": 0, 00:20:37.321 "crdt3": 0 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "nvmf_create_transport", 00:20:37.321 "params": { 00:20:37.321 "trtype": "TCP", 00:20:37.321 "max_queue_depth": 128, 
00:20:37.321 "max_io_qpairs_per_ctrlr": 127, 00:20:37.321 "in_capsule_data_size": 4096, 00:20:37.321 "max_io_size": 131072, 00:20:37.321 "io_unit_size": 131072, 00:20:37.321 "max_aq_depth": 128, 00:20:37.321 "num_shared_buffers": 511, 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.321 "buf_cache_size": 4294967295, 00:20:37.321 "dif_insert_or_strip": false, 00:20:37.321 "zcopy": false, 00:20:37.321 "c2h_success": false, 00:20:37.321 "sock_priority": 0, 00:20:37.321 "abort_timeout_sec": 1, 00:20:37.321 "ack_timeout": 0, 00:20:37.321 "data_wr_pool_size": 0 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "nvmf_create_subsystem", 00:20:37.321 "params": { 00:20:37.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.321 "allow_any_host": false, 00:20:37.321 "serial_number": "00000000000000000000", 00:20:37.321 "model_number": "SPDK bdev Controller", 00:20:37.321 "max_namespaces": 32, 00:20:37.321 "min_cntlid": 1, 00:20:37.321 "max_cntlid": 65519, 00:20:37.321 "ana_reporting": false 00:20:37.321 } 00:20:37.321 }, 00:20:37.321 { 00:20:37.321 "method": "nvmf_subsystem_add_host", 00:20:37.322 "params": { 00:20:37.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.322 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.322 "psk": "key0" 00:20:37.322 } 00:20:37.322 }, 00:20:37.322 { 00:20:37.322 "method": "nvmf_subsystem_add_ns", 00:20:37.322 "params": { 00:20:37.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.322 "namespace": { 00:20:37.322 "nsid": 1, 00:20:37.322 "bdev_name": "malloc0", 00:20:37.322 "nguid": "BEF3D17B648A41A7BB4F571CEF4CF074", 00:20:37.322 "uuid": "bef3d17b-648a-41a7-bb4f-571cef4cf074", 00:20:37.322 "no_auto_visible": false 00:20:37.322 } 00:20:37.322 } 00:20:37.322 }, 00:20:37.322 { 00:20:37.322 "method": "nvmf_subsystem_add_listener", 00:20:37.322 "params": { 00:20:37.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.322 "listen_address": { 00:20:37.322 "trtype": "TCP", 
00:20:37.322 "adrfam": "IPv4", 00:20:37.322 "traddr": "10.0.0.2", 00:20:37.322 "trsvcid": "4420" 00:20:37.322 }, 00:20:37.322 "secure_channel": false, 00:20:37.322 "sock_impl": "ssl" 00:20:37.322 } 00:20:37.322 } 00:20:37.322 ] 00:20:37.322 } 00:20:37.322 ] 00:20:37.322 }' 00:20:37.322 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2463304 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2463304 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2463304 ']' 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.580 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.581 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.581 [2024-10-01 15:54:47.563961] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:20:37.581 [2024-10-01 15:54:47.564008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.581 [2024-10-01 15:54:47.633617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.581 [2024-10-01 15:54:47.709945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.581 [2024-10-01 15:54:47.709979] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.581 [2024-10-01 15:54:47.709989] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.581 [2024-10-01 15:54:47.709996] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.581 [2024-10-01 15:54:47.710001] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.581 [2024-10-01 15:54:47.710047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.839 [2024-10-01 15:54:47.933310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.839 [2024-10-01 15:54:47.965344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.839 [2024-10-01 15:54:47.965560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2463337 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2463337 /var/tmp/bdevperf.sock 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2463337 ']' 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.407 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:38.407 "subsystems": [ 00:20:38.407 { 00:20:38.407 "subsystem": "keyring", 00:20:38.407 "config": [ 00:20:38.407 { 00:20:38.407 "method": "keyring_file_add_key", 00:20:38.407 "params": { 00:20:38.408 "name": "key0", 00:20:38.408 "path": "/tmp/tmp.ktOg8wkXvb" 00:20:38.408 } 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "iobuf", 00:20:38.408 "config": [ 00:20:38.408 { 00:20:38.408 "method": "iobuf_set_options", 00:20:38.408 "params": { 00:20:38.408 "small_pool_count": 8192, 00:20:38.408 "large_pool_count": 1024, 00:20:38.408 "small_bufsize": 8192, 00:20:38.408 "large_bufsize": 135168 00:20:38.408 } 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "sock", 00:20:38.408 "config": [ 00:20:38.408 { 00:20:38.408 "method": "sock_set_default_impl", 00:20:38.408 "params": { 00:20:38.408 "impl_name": "posix" 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "sock_impl_set_options", 00:20:38.408 "params": { 00:20:38.408 "impl_name": "ssl", 00:20:38.408 "recv_buf_size": 4096, 00:20:38.408 "send_buf_size": 4096, 00:20:38.408 "enable_recv_pipe": true, 00:20:38.408 "enable_quickack": false, 00:20:38.408 "enable_placement_id": 0, 00:20:38.408 "enable_zerocopy_send_server": true, 00:20:38.408 "enable_zerocopy_send_client": false, 00:20:38.408 "zerocopy_threshold": 0, 00:20:38.408 "tls_version": 0, 00:20:38.408 "enable_ktls": false 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "sock_impl_set_options", 00:20:38.408 "params": { 00:20:38.408 "impl_name": "posix", 
00:20:38.408 "recv_buf_size": 2097152, 00:20:38.408 "send_buf_size": 2097152, 00:20:38.408 "enable_recv_pipe": true, 00:20:38.408 "enable_quickack": false, 00:20:38.408 "enable_placement_id": 0, 00:20:38.408 "enable_zerocopy_send_server": true, 00:20:38.408 "enable_zerocopy_send_client": false, 00:20:38.408 "zerocopy_threshold": 0, 00:20:38.408 "tls_version": 0, 00:20:38.408 "enable_ktls": false 00:20:38.408 } 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "vmd", 00:20:38.408 "config": [] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "accel", 00:20:38.408 "config": [ 00:20:38.408 { 00:20:38.408 "method": "accel_set_options", 00:20:38.408 "params": { 00:20:38.408 "small_cache_size": 128, 00:20:38.408 "large_cache_size": 16, 00:20:38.408 "task_count": 2048, 00:20:38.408 "sequence_count": 2048, 00:20:38.408 "buf_count": 2048 00:20:38.408 } 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "bdev", 00:20:38.408 "config": [ 00:20:38.408 { 00:20:38.408 "method": "bdev_set_options", 00:20:38.408 "params": { 00:20:38.408 "bdev_io_pool_size": 65535, 00:20:38.408 "bdev_io_cache_size": 256, 00:20:38.408 "bdev_auto_examine": true, 00:20:38.408 "iobuf_small_cache_size": 128, 00:20:38.408 "iobuf_large_cache_size": 16 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_raid_set_options", 00:20:38.408 "params": { 00:20:38.408 "process_window_size_kb": 1024, 00:20:38.408 "process_max_bandwidth_mb_sec": 0 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_iscsi_set_options", 00:20:38.408 "params": { 00:20:38.408 "timeout_sec": 30 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_nvme_set_options", 00:20:38.408 "params": { 00:20:38.408 "action_on_timeout": "none", 00:20:38.408 "timeout_us": 0, 00:20:38.408 "timeout_admin_us": 0, 00:20:38.408 "keep_alive_timeout_ms": 10000, 00:20:38.408 "arbitration_burst": 0, 00:20:38.408 
"low_priority_weight": 0, 00:20:38.408 "medium_priority_weight": 0, 00:20:38.408 "high_priority_weight": 0, 00:20:38.408 "nvme_adminq_poll_period_us": 10000, 00:20:38.408 "nvme_ioq_poll_period_us": 0, 00:20:38.408 "io_queue_requests": 512, 00:20:38.408 "delay_cmd_submit": true, 00:20:38.408 "transport_retry_count": 4, 00:20:38.408 "bdev_retry_count": 3, 00:20:38.408 "transport_ack_timeout": 0, 00:20:38.408 "ctrlr_loss_timeout_sec": 0, 00:20:38.408 "reconnect_delay_sec": 0, 00:20:38.408 "fast_io_fail_timeout_sec": 0, 00:20:38.408 "disable_auto_failback": false, 00:20:38.408 "generate_uuids": false, 00:20:38.408 "transport_tos": 0, 00:20:38.408 "nvme_error_stat": false, 00:20:38.408 "rdma_srq_size": 0, 00:20:38.408 "io_path_stat": false, 00:20:38.408 "allow_accel_sequence": false, 00:20:38.408 "rdma_max_cq_size": 0, 00:20:38.408 "rdma_cm_event_timeout_ms": 0, 00:20:38.408 "dhchap_digests": [ 00:20:38.408 "sha256", 00:20:38.408 "sha384", 00:20:38.408 "sha512" 00:20:38.408 ], 00:20:38.408 "dhchap_dhgroups": [ 00:20:38.408 "null", 00:20:38.408 "ffdhe2048", 00:20:38.408 "ffdhe3072", 00:20:38.408 "ffdhe4096", 00:20:38.408 "ffdhe6144", 00:20:38.408 "ffdhe8192" 00:20:38.408 ] 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_nvme_attach_controller", 00:20:38.408 "params": { 00:20:38.408 "name": "nvme0", 00:20:38.408 "trtype": "TCP", 00:20:38.408 "adrfam": "IPv4", 00:20:38.408 "traddr": "10.0.0.2", 00:20:38.408 "trsvcid": "4420", 00:20:38.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.408 "prchk_reftag": false, 00:20:38.408 "prchk_guard": false, 00:20:38.408 "ctrlr_loss_timeout_sec": 0, 00:20:38.408 "reconnect_delay_sec": 0, 00:20:38.408 "fast_io_fail_timeout_sec": 0, 00:20:38.408 "psk": "key0", 00:20:38.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.408 "hdgst": false, 00:20:38.408 "ddgst": false, 00:20:38.408 "multipath": "multipath" 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_nvme_set_hotplug", 
00:20:38.408 "params": { 00:20:38.408 "period_us": 100000, 00:20:38.408 "enable": false 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_enable_histogram", 00:20:38.408 "params": { 00:20:38.408 "name": "nvme0n1", 00:20:38.408 "enable": true 00:20:38.408 } 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "method": "bdev_wait_for_examine" 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }, 00:20:38.408 { 00:20:38.408 "subsystem": "nbd", 00:20:38.408 "config": [] 00:20:38.408 } 00:20:38.408 ] 00:20:38.408 }' 00:20:38.408 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.408 15:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.409 [2024-10-01 15:54:48.470232] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:20:38.409 [2024-10-01 15:54:48.470281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463337 ] 00:20:38.409 [2024-10-01 15:54:48.538660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.668 [2024-10-01 15:54:48.611942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.668 [2024-10-01 15:54:48.764763] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.233 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.233 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:39.233 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:39.233 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:20:39.492 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.492 15:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.492 Running I/O for 1 seconds... 00:20:40.427 5222.00 IOPS, 20.40 MiB/s 00:20:40.427 Latency(us) 00:20:40.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.427 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.427 Verification LBA range: start 0x0 length 0x2000 00:20:40.427 nvme0n1 : 1.03 5219.63 20.39 0.00 0.00 24283.71 5305.30 24591.60 00:20:40.427 =================================================================================================================== 00:20:40.427 Total : 5219.63 20.39 0.00 0.00 24283.71 5305.30 24591.60 00:20:40.427 { 00:20:40.427 "results": [ 00:20:40.427 { 00:20:40.427 "job": "nvme0n1", 00:20:40.427 "core_mask": "0x2", 00:20:40.427 "workload": "verify", 00:20:40.427 "status": "finished", 00:20:40.427 "verify_range": { 00:20:40.427 "start": 0, 00:20:40.427 "length": 8192 00:20:40.427 }, 00:20:40.427 "queue_depth": 128, 00:20:40.427 "io_size": 4096, 00:20:40.427 "runtime": 1.025169, 00:20:40.427 "iops": 5219.627202929468, 00:20:40.427 "mibps": 20.389168761443234, 00:20:40.427 "io_failed": 0, 00:20:40.427 "io_timeout": 0, 00:20:40.427 "avg_latency_us": 24283.712157763126, 00:20:40.427 "min_latency_us": 5305.295238095238, 00:20:40.427 "max_latency_us": 24591.60380952381 00:20:40.427 } 00:20:40.427 ], 00:20:40.427 "core_count": 1 00:20:40.427 } 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@15 -- # process_shm --id 0 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:40.687 nvmf_trace.0 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2463337 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2463337 ']' 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2463337 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463337 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463337' 00:20:40.687 killing process with pid 2463337 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2463337 00:20:40.687 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.687 00:20:40.687 Latency(us) 00:20:40.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.687 =================================================================================================================== 00:20:40.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.687 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2463337 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.946 15:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.946 rmmod nvme_tcp 00:20:40.946 rmmod nvme_fabrics 00:20:40.946 rmmod nvme_keyring 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:40.946 
15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 2463304 ']' 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 2463304 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2463304 ']' 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2463304 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463304 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463304' 00:20:40.946 killing process with pid 2463304 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2463304 00:20:40.946 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2463304 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep 
-v SPDK_NVMF 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.206 15:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Iegb3MmDVX /tmp/tmp.o5N4bMdMSa /tmp/tmp.ktOg8wkXvb 00:20:43.743 00:20:43.743 real 1m29.994s 00:20:43.743 user 2m20.558s 00:20:43.743 sys 0m31.138s 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.743 ************************************ 00:20:43.743 END TEST nvmf_tls 00:20:43.743 ************************************ 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.743 ************************************ 00:20:43.743 START TEST 
nvmf_fips 00:20:43.743 ************************************ 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.743 * Looking for test storage... 00:20:43.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.743 15:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.743 --rc genhtml_branch_coverage=1 00:20:43.743 --rc genhtml_function_coverage=1 00:20:43.743 --rc genhtml_legend=1 00:20:43.743 --rc geninfo_all_blocks=1 00:20:43.743 --rc geninfo_unexecuted_blocks=1 00:20:43.743 00:20:43.743 ' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.743 --rc genhtml_branch_coverage=1 00:20:43.743 --rc genhtml_function_coverage=1 00:20:43.743 --rc genhtml_legend=1 00:20:43.743 --rc geninfo_all_blocks=1 00:20:43.743 --rc geninfo_unexecuted_blocks=1 00:20:43.743 00:20:43.743 ' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.743 --rc genhtml_branch_coverage=1 00:20:43.743 --rc genhtml_function_coverage=1 00:20:43.743 --rc genhtml_legend=1 00:20:43.743 --rc geninfo_all_blocks=1 00:20:43.743 --rc geninfo_unexecuted_blocks=1 00:20:43.743 00:20:43.743 ' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:43.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.743 --rc genhtml_branch_coverage=1 00:20:43.743 --rc genhtml_function_coverage=1 00:20:43.743 --rc genhtml_legend=1 00:20:43.743 --rc geninfo_all_blocks=1 00:20:43.743 --rc geninfo_unexecuted_blocks=1 00:20:43.743 00:20:43.743 ' 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.743 
15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.743 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s 
extglob 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.744 15:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.744 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:43.745 Error setting digest 00:20:43.745 40F20808ED7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:43.745 40F20808ED7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:43.745 15:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.745 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:50.313 15:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.313 15:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.313 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.313 
15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.313 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.313 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:20:50.313 00:20:50.313 --- 10.0.0.2 ping statistics --- 00:20:50.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.314 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:20:50.314 00:20:50.314 --- 10.0.0.1 ping statistics --- 00:20:50.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.314 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=2467350 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 2467350 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2467350 ']' 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.314 15:54:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.314 [2024-10-01 15:54:59.792125] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:20:50.314 [2024-10-01 15:54:59.792171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.314 [2024-10-01 15:54:59.864437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.314 [2024-10-01 15:54:59.935332] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.314 [2024-10-01 15:54:59.935374] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.314 [2024-10-01 15:54:59.935384] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.314 [2024-10-01 15:54:59.935390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.314 [2024-10-01 15:54:59.935395] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.314 [2024-10-01 15:54:59.935418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.lqN 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.lqN 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.lqN 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.lqN 00:20:50.573 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.831 [2024-10-01 15:55:00.833296] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.831 [2024-10-01 15:55:00.849303] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.831 [2024-10-01 15:55:00.849507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.831 malloc0 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2467599 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2467599 /var/tmp/bdevperf.sock 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2467599 ']' 00:20:50.831 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.832 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.832 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.832 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.832 15:55:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.832 [2024-10-01 15:55:00.994326] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:20:50.832 [2024-10-01 15:55:00.994378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467599 ] 00:20:51.091 [2024-10-01 15:55:01.060074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.091 [2024-10-01 15:55:01.132949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.658 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.658 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:51.658 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.lqN 00:20:51.917 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.176 [2024-10-01 15:55:02.180191] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.176 TLSTESTn1 00:20:52.176 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.176 Running I/O for 10 seconds... 
00:21:02.534 5528.00 IOPS, 21.59 MiB/s 5574.00 IOPS, 21.77 MiB/s 5555.33 IOPS, 21.70 MiB/s 5374.75 IOPS, 21.00 MiB/s 5305.60 IOPS, 20.73 MiB/s 5276.17 IOPS, 20.61 MiB/s 5159.14 IOPS, 20.15 MiB/s 5118.00 IOPS, 19.99 MiB/s 5105.33 IOPS, 19.94 MiB/s 5093.60 IOPS, 19.90 MiB/s 00:21:02.534 Latency(us) 00:21:02.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.534 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.534 Verification LBA range: start 0x0 length 0x2000 00:21:02.535 TLSTESTn1 : 10.02 5097.83 19.91 0.00 0.00 25073.19 7146.54 31457.28 00:21:02.535 =================================================================================================================== 00:21:02.535 Total : 5097.83 19.91 0.00 0.00 25073.19 7146.54 31457.28 00:21:02.535 { 00:21:02.535 "results": [ 00:21:02.535 { 00:21:02.535 "job": "TLSTESTn1", 00:21:02.535 "core_mask": "0x4", 00:21:02.535 "workload": "verify", 00:21:02.535 "status": "finished", 00:21:02.535 "verify_range": { 00:21:02.535 "start": 0, 00:21:02.535 "length": 8192 00:21:02.535 }, 00:21:02.535 "queue_depth": 128, 00:21:02.535 "io_size": 4096, 00:21:02.535 "runtime": 10.016815, 00:21:02.535 "iops": 5097.828002214276, 00:21:02.535 "mibps": 19.913390633649517, 00:21:02.535 "io_failed": 0, 00:21:02.535 "io_timeout": 0, 00:21:02.535 "avg_latency_us": 25073.190223435762, 00:21:02.535 "min_latency_us": 7146.544761904762, 00:21:02.535 "max_latency_us": 31457.28 00:21:02.535 } 00:21:02.535 ], 00:21:02.535 "core_count": 1 00:21:02.535 } 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:02.535 15:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:02.535 nvmf_trace.0 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2467599 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2467599 ']' 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2467599 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2467599 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2467599' 00:21:02.535 killing process with pid 2467599 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2467599 00:21:02.535 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.535 00:21:02.535 Latency(us) 00:21:02.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.535 =================================================================================================================== 00:21:02.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.535 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2467599 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.794 rmmod nvme_tcp 00:21:02.794 rmmod nvme_fabrics 00:21:02.794 rmmod nvme_keyring 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 2467350 ']' 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 2467350 00:21:02.794 15:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2467350 ']' 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2467350 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2467350 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2467350' 00:21:02.794 killing process with pid 2467350 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2467350 00:21:02.794 15:55:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2467350 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.054 15:55:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.955 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.955 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.lqN 00:21:04.955 00:21:04.955 real 0m21.731s 00:21:04.955 user 0m22.809s 00:21:04.955 sys 0m10.348s 00:21:04.955 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:04.955 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:04.955 ************************************ 00:21:04.955 END TEST nvmf_fips 00:21:04.955 ************************************ 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.213 ************************************ 00:21:05.213 START TEST nvmf_control_msg_list 00:21:05.213 ************************************ 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:05.213 * Looking for test storage... 00:21:05.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:05.213 
15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.213 --rc genhtml_branch_coverage=1 00:21:05.213 --rc genhtml_function_coverage=1 00:21:05.213 --rc genhtml_legend=1 00:21:05.213 --rc geninfo_all_blocks=1 00:21:05.213 --rc geninfo_unexecuted_blocks=1 00:21:05.213 00:21:05.213 ' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.213 --rc genhtml_branch_coverage=1 00:21:05.213 --rc genhtml_function_coverage=1 00:21:05.213 --rc genhtml_legend=1 00:21:05.213 --rc geninfo_all_blocks=1 00:21:05.213 --rc geninfo_unexecuted_blocks=1 00:21:05.213 00:21:05.213 ' 00:21:05.213 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.213 --rc genhtml_branch_coverage=1 00:21:05.213 --rc genhtml_function_coverage=1 00:21:05.213 --rc genhtml_legend=1 00:21:05.213 --rc geninfo_all_blocks=1 00:21:05.213 --rc geninfo_unexecuted_blocks=1 00:21:05.214 00:21:05.214 ' 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:05.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.214 --rc genhtml_branch_coverage=1 00:21:05.214 --rc genhtml_function_coverage=1 00:21:05.214 --rc genhtml_legend=1 00:21:05.214 --rc geninfo_all_blocks=1 00:21:05.214 --rc geninfo_unexecuted_blocks=1 00:21:05.214 00:21:05.214 ' 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.214 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.472 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 
-- # nvmftestinit 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.473 15:55:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.043 15:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.043 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:12.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.043 15:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.043 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.043 Found net devices 
under 0000:86:00.1: cvl_0_1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:12.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:21:12.043 00:21:12.043 --- 10.0.0.2 ping statistics --- 00:21:12.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.043 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:21:12.043 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:21:12.043 00:21:12.043 --- 10.0.0.1 ping statistics --- 00:21:12.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.044 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:12.044 15:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=2472997 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 2472997 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2472997 ']' 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.044 15:55:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.044 [2024-10-01 15:55:21.433075] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:21:12.044 [2024-10-01 15:55:21.433127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.044 [2024-10-01 15:55:21.505251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.044 [2024-10-01 15:55:21.589388] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.044 [2024-10-01 15:55:21.589419] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.044 [2024-10-01 15:55:21.589427] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.044 [2024-10-01 15:55:21.589433] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.044 [2024-10-01 15:55:21.589438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.044 [2024-10-01 15:55:21.589456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 [2024-10-01 15:55:22.311000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 Malloc0 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.303 [2024-10-01 15:55:22.374076] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2473225 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2473226 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2473227 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2473225 00:21:12.303 15:55:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.303 [2024-10-01 15:55:22.444371] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:12.303 [2024-10-01 15:55:22.454445] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.303 [2024-10-01 15:55:22.454606] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:13.679 Initializing NVMe Controllers 00:21:13.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:13.679 Initialization complete. Launching workers. 00:21:13.679 ======================================================== 00:21:13.679 Latency(us) 00:21:13.679 Device Information : IOPS MiB/s Average min max 00:21:13.679 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40924.38 40542.27 41899.03 00:21:13.679 ======================================================== 00:21:13.679 Total : 25.00 0.10 40924.38 40542.27 41899.03 00:21:13.679 00:21:13.679 Initializing NVMe Controllers 00:21:13.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:13.679 Initialization complete. Launching workers. 
00:21:13.679 ======================================================== 00:21:13.679 Latency(us) 00:21:13.679 Device Information : IOPS MiB/s Average min max 00:21:13.679 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6980.00 27.27 142.92 129.61 346.12 00:21:13.679 ======================================================== 00:21:13.679 Total : 6980.00 27.27 142.92 129.61 346.12 00:21:13.679 00:21:13.679 Initializing NVMe Controllers 00:21:13.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:13.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:13.679 Initialization complete. Launching workers. 00:21:13.679 ======================================================== 00:21:13.679 Latency(us) 00:21:13.679 Device Information : IOPS MiB/s Average min max 00:21:13.679 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 44.00 0.17 23334.85 204.62 41879.80 00:21:13.679 ======================================================== 00:21:13.679 Total : 44.00 0.17 23334.85 204.62 41879.80 00:21:13.679 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2473226 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2473227 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.679 15:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.679 rmmod nvme_tcp 00:21:13.679 rmmod nvme_fabrics 00:21:13.679 rmmod nvme_keyring 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 2472997 ']' 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 2472997 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2472997 ']' 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2472997 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2472997 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 2472997' 00:21:13.679 killing process with pid 2472997 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2472997 00:21:13.679 15:55:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2472997 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.937 15:55:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.469 00:21:16.469 real 0m10.874s 00:21:16.469 user 0m7.540s 
00:21:16.469 sys 0m5.571s 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.469 ************************************ 00:21:16.469 END TEST nvmf_control_msg_list 00:21:16.469 ************************************ 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.469 ************************************ 00:21:16.469 START TEST nvmf_wait_for_buf 00:21:16.469 ************************************ 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.469 * Looking for test storage... 
00:21:16.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:21:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.469 --rc genhtml_branch_coverage=1 00:21:16.469 --rc genhtml_function_coverage=1 00:21:16.469 --rc genhtml_legend=1 00:21:16.469 --rc geninfo_all_blocks=1 00:21:16.469 --rc geninfo_unexecuted_blocks=1 00:21:16.469 00:21:16.469 ' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.469 --rc genhtml_branch_coverage=1 00:21:16.469 --rc genhtml_function_coverage=1 00:21:16.469 --rc genhtml_legend=1 00:21:16.469 --rc geninfo_all_blocks=1 00:21:16.469 --rc geninfo_unexecuted_blocks=1 00:21:16.469 00:21:16.469 ' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.469 --rc genhtml_branch_coverage=1 00:21:16.469 --rc genhtml_function_coverage=1 00:21:16.469 --rc genhtml_legend=1 00:21:16.469 --rc geninfo_all_blocks=1 00:21:16.469 --rc geninfo_unexecuted_blocks=1 00:21:16.469 00:21:16.469 ' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.469 --rc genhtml_branch_coverage=1 00:21:16.469 --rc genhtml_function_coverage=1 00:21:16.469 --rc genhtml_legend=1 00:21:16.469 --rc geninfo_all_blocks=1 00:21:16.469 --rc geninfo_unexecuted_blocks=1 00:21:16.469 00:21:16.469 ' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.469 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.470 15:55:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.042 15:55:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:23.042 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.042 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.042 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:21:23.043 00:21:23.043 --- 10.0.0.2 ping statistics --- 00:21:23.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.043 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:23.043 00:21:23.043 --- 10.0.0.1 ping statistics --- 00:21:23.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.043 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=2476987 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 2476987 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2476987 ']' 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.043 15:55:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.043 [2024-10-01 15:55:32.367586] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:21:23.043 [2024-10-01 15:55:32.367637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.043 [2024-10-01 15:55:32.441902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.043 [2024-10-01 15:55:32.514895] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.043 [2024-10-01 15:55:32.514932] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.043 [2024-10-01 15:55:32.514940] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.043 [2024-10-01 15:55:32.514946] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.043 [2024-10-01 15:55:32.514951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.043 [2024-10-01 15:55:32.514970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.043 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.043 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:23.043 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:23.043 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.043 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 Malloc0 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.302 [2024-10-01 15:55:33.349805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:23.302 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.303 [2024-10-01 15:55:33.374014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.303 15:55:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.303 [2024-10-01 15:55:33.447940] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:24.681 Initializing NVMe Controllers 00:21:24.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:24.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:24.681 Initialization complete. Launching workers. 00:21:24.681 ======================================================== 00:21:24.681 Latency(us) 00:21:24.681 Device Information : IOPS MiB/s Average min max 00:21:24.681 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 32.98 4.12 124084.89 7260.40 194471.34 00:21:24.681 ======================================================== 00:21:24.681 Total : 32.98 4.12 124084.89 7260.40 194471.34 00:21:24.681 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=502 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 502 -eq 0 ]] 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.681 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.681 rmmod nvme_tcp 00:21:24.939 rmmod nvme_fabrics 00:21:24.939 rmmod nvme_keyring 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 2476987 ']' 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 2476987 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2476987 ']' 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2476987 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2476987 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2476987' 00:21:24.939 killing process with pid 2476987 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2476987 00:21:24.939 15:55:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2476987 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.198 15:55:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.107 00:21:27.107 real 0m11.061s 00:21:27.107 user 0m4.763s 00:21:27.107 sys 0m4.922s 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.107 ************************************ 00:21:27.107 END TEST nvmf_wait_for_buf 00:21:27.107 ************************************ 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.107 15:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 
-- # net_devs=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.677 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.677 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.677 15:55:42 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.677 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.677 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:33.677 15:55:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.677 ************************************ 00:21:33.677 START TEST nvmf_perf_adq 00:21:33.677 ************************************ 00:21:33.678 15:55:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:33.678 * Looking for test storage... 
00:21:33.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.678 15:55:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:33.678 15:55:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:21:33.678 15:55:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:33.678 15:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:33.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.678 --rc 
genhtml_branch_coverage=1 00:21:33.678 --rc genhtml_function_coverage=1 00:21:33.678 --rc genhtml_legend=1 00:21:33.678 --rc geninfo_all_blocks=1 00:21:33.678 --rc geninfo_unexecuted_blocks=1 00:21:33.678 00:21:33.678 ' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:33.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.678 --rc genhtml_branch_coverage=1 00:21:33.678 --rc genhtml_function_coverage=1 00:21:33.678 --rc genhtml_legend=1 00:21:33.678 --rc geninfo_all_blocks=1 00:21:33.678 --rc geninfo_unexecuted_blocks=1 00:21:33.678 00:21:33.678 ' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:33.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.678 --rc genhtml_branch_coverage=1 00:21:33.678 --rc genhtml_function_coverage=1 00:21:33.678 --rc genhtml_legend=1 00:21:33.678 --rc geninfo_all_blocks=1 00:21:33.678 --rc geninfo_unexecuted_blocks=1 00:21:33.678 00:21:33.678 ' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:33.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.678 --rc genhtml_branch_coverage=1 00:21:33.678 --rc genhtml_function_coverage=1 00:21:33.678 --rc genhtml_legend=1 00:21:33.678 --rc geninfo_all_blocks=1 00:21:33.678 --rc geninfo_unexecuted_blocks=1 00:21:33.678 00:21:33.678 ' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.678 15:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.678 15:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.678 15:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.678 15:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.951 15:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:38.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:38.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:38.951 Found net devices under 0000:86:00.0: cvl_0_0 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.951 15:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:38.951 Found net devices under 0000:86:00.1: cvl_0_1 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:38.951 15:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:39.889 15:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:41.795 15:55:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:47.065 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.066 15:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.066 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.066 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.066 15:55:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:21:47.066 00:21:47.066 --- 10.0.0.2 ping statistics --- 00:21:47.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.066 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:21:47.066 00:21:47.066 --- 10.0.0.1 ping statistics --- 00:21:47.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.066 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=2485358 00:21:47.066 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 2485358 00:21:47.067 
15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2485358 ']' 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.067 15:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.067 [2024-10-01 15:55:57.182933] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:21:47.067 [2024-10-01 15:55:57.182980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.067 [2024-10-01 15:55:57.255064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.326 [2024-10-01 15:55:57.336049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.326 [2024-10-01 15:55:57.336087] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:47.326 [2024-10-01 15:55:57.336095] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.326 [2024-10-01 15:55:57.336101] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.326 [2024-10-01 15:55:57.336106] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.326 [2024-10-01 15:55:57.336162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.326 [2024-10-01 15:55:57.336269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.326 [2024-10-01 15:55:57.336375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.326 [2024-10-01 15:55:57.336376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.891 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 [2024-10-01 15:55:58.215587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 
15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 Malloc1 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.149 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.150 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.150 [2024-10-01 15:55:58.267114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:48.150 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.150 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2485582 00:21:48.150 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:48.150 15:55:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:50.680 "tick_rate": 2100000000, 00:21:50.680 "poll_groups": [ 00:21:50.680 { 00:21:50.680 "name": "nvmf_tgt_poll_group_000", 00:21:50.680 "admin_qpairs": 1, 00:21:50.680 "io_qpairs": 1, 00:21:50.680 "current_admin_qpairs": 1, 00:21:50.680 "current_io_qpairs": 1, 00:21:50.680 "pending_bdev_io": 0, 00:21:50.680 "completed_nvme_io": 19673, 00:21:50.680 "transports": [ 00:21:50.680 { 00:21:50.680 "trtype": "TCP" 00:21:50.680 } 00:21:50.680 ] 00:21:50.680 }, 00:21:50.680 { 00:21:50.680 "name": "nvmf_tgt_poll_group_001", 00:21:50.680 "admin_qpairs": 0, 00:21:50.680 "io_qpairs": 1, 00:21:50.680 "current_admin_qpairs": 0, 00:21:50.680 "current_io_qpairs": 1, 00:21:50.680 "pending_bdev_io": 0, 00:21:50.680 "completed_nvme_io": 20144, 00:21:50.680 "transports": [ 
00:21:50.680 { 00:21:50.680 "trtype": "TCP" 00:21:50.680 } 00:21:50.680 ] 00:21:50.680 }, 00:21:50.680 { 00:21:50.680 "name": "nvmf_tgt_poll_group_002", 00:21:50.680 "admin_qpairs": 0, 00:21:50.680 "io_qpairs": 1, 00:21:50.680 "current_admin_qpairs": 0, 00:21:50.680 "current_io_qpairs": 1, 00:21:50.680 "pending_bdev_io": 0, 00:21:50.680 "completed_nvme_io": 20076, 00:21:50.680 "transports": [ 00:21:50.680 { 00:21:50.680 "trtype": "TCP" 00:21:50.680 } 00:21:50.680 ] 00:21:50.680 }, 00:21:50.680 { 00:21:50.680 "name": "nvmf_tgt_poll_group_003", 00:21:50.680 "admin_qpairs": 0, 00:21:50.680 "io_qpairs": 1, 00:21:50.680 "current_admin_qpairs": 0, 00:21:50.680 "current_io_qpairs": 1, 00:21:50.680 "pending_bdev_io": 0, 00:21:50.680 "completed_nvme_io": 20008, 00:21:50.680 "transports": [ 00:21:50.680 { 00:21:50.680 "trtype": "TCP" 00:21:50.680 } 00:21:50.680 ] 00:21:50.680 } 00:21:50.680 ] 00:21:50.680 }' 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:50.680 15:56:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2485582 00:21:58.868 Initializing NVMe Controllers 00:21:58.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:58.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:58.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:58.868 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:58.868 Initialization complete. Launching workers. 00:21:58.868 ======================================================== 00:21:58.868 Latency(us) 00:21:58.868 Device Information : IOPS MiB/s Average min max 00:21:58.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10317.40 40.30 6202.23 2433.69 10907.06 00:21:58.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10407.80 40.66 6149.23 2282.82 11357.07 00:21:58.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10354.30 40.45 6180.49 2120.86 10534.88 00:21:58.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10301.30 40.24 6212.49 2340.40 12968.20 00:21:58.868 ======================================================== 00:21:58.868 Total : 41380.80 161.64 6186.01 2120.86 12968.20 00:21:58.868 00:21:58.868 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:58.868 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.869 rmmod nvme_tcp 00:21:58.869 rmmod nvme_fabrics 00:21:58.869 rmmod nvme_keyring 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:58.869 15:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 2485358 ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2485358 ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2485358' 00:21:58.869 killing process with pid 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2485358 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:58.869 
15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.869 15:56:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.770 15:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.770 15:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:00.770 15:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:00.770 15:56:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:01.804 15:56:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:03.778 15:56:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@472 -- # prepare_net_devs 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.058 15:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.058 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.058 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:09.058 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.059 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.059 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.059 15:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.059 15:56:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.059 15:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:22:09.059 00:22:09.059 --- 10.0.0.2 ping statistics --- 00:22:09.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.059 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:22:09.059 00:22:09.059 --- 10.0.0.1 ping statistics --- 00:22:09.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.059 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:09.059 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:09.319 net.core.busy_poll = 1 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:09.319 net.core.busy_read = 1 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=2489368 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 2489368 00:22:09.319 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2489368 ']' 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.320 15:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.578 [2024-10-01 15:56:19.533517] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:09.578 [2024-10-01 15:56:19.533569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.578 [2024-10-01 15:56:19.605469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.578 [2024-10-01 15:56:19.678196] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.578 [2024-10-01 15:56:19.678239] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.578 [2024-10-01 15:56:19.678246] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.578 [2024-10-01 15:56:19.678252] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:09.579 [2024-10-01 15:56:19.678257] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.579 [2024-10-01 15:56:19.678319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.579 [2024-10-01 15:56:19.678431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.579 [2024-10-01 15:56:19.678538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.579 [2024-10-01 15:56:19.678539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 [2024-10-01 15:56:20.553169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 Malloc1 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.516 [2024-10-01 15:56:20.604519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2489621 
00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:10.516 15:56:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:13.046 "tick_rate": 2100000000, 00:22:13.046 "poll_groups": [ 00:22:13.046 { 00:22:13.046 "name": "nvmf_tgt_poll_group_000", 00:22:13.046 "admin_qpairs": 1, 00:22:13.046 "io_qpairs": 1, 00:22:13.046 "current_admin_qpairs": 1, 00:22:13.046 "current_io_qpairs": 1, 00:22:13.046 "pending_bdev_io": 0, 00:22:13.046 "completed_nvme_io": 24354, 00:22:13.046 "transports": [ 00:22:13.046 { 00:22:13.046 "trtype": "TCP" 00:22:13.046 } 00:22:13.046 ] 00:22:13.046 }, 00:22:13.046 { 00:22:13.046 "name": "nvmf_tgt_poll_group_001", 00:22:13.046 "admin_qpairs": 0, 00:22:13.046 "io_qpairs": 3, 00:22:13.046 "current_admin_qpairs": 0, 00:22:13.046 "current_io_qpairs": 3, 00:22:13.046 "pending_bdev_io": 0, 00:22:13.046 "completed_nvme_io": 31703, 00:22:13.046 "transports": [ 00:22:13.046 { 00:22:13.046 "trtype": "TCP" 00:22:13.046 } 00:22:13.046 ] 00:22:13.046 }, 00:22:13.046 { 00:22:13.046 "name": "nvmf_tgt_poll_group_002", 00:22:13.046 "admin_qpairs": 0, 00:22:13.046 "io_qpairs": 0, 00:22:13.046 "current_admin_qpairs": 0, 
00:22:13.046 "current_io_qpairs": 0, 00:22:13.046 "pending_bdev_io": 0, 00:22:13.046 "completed_nvme_io": 0, 00:22:13.046 "transports": [ 00:22:13.046 { 00:22:13.046 "trtype": "TCP" 00:22:13.046 } 00:22:13.046 ] 00:22:13.046 }, 00:22:13.046 { 00:22:13.046 "name": "nvmf_tgt_poll_group_003", 00:22:13.046 "admin_qpairs": 0, 00:22:13.046 "io_qpairs": 0, 00:22:13.046 "current_admin_qpairs": 0, 00:22:13.046 "current_io_qpairs": 0, 00:22:13.046 "pending_bdev_io": 0, 00:22:13.046 "completed_nvme_io": 0, 00:22:13.046 "transports": [ 00:22:13.046 { 00:22:13.046 "trtype": "TCP" 00:22:13.046 } 00:22:13.046 ] 00:22:13.046 } 00:22:13.046 ] 00:22:13.046 }' 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:13.046 15:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2489621 00:22:21.160 Initializing NVMe Controllers 00:22:21.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:21.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:21.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:21.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:21.160 Initialization complete. Launching workers. 
00:22:21.160 ======================================================== 00:22:21.160 Latency(us) 00:22:21.160 Device Information : IOPS MiB/s Average min max 00:22:21.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5743.26 22.43 11176.09 1510.04 60019.60 00:22:21.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14874.41 58.10 4302.08 1542.04 45546.02 00:22:21.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4688.27 18.31 13666.39 1491.57 59626.54 00:22:21.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4862.97 19.00 13159.12 1607.88 58626.30 00:22:21.160 ======================================================== 00:22:21.160 Total : 30168.92 117.85 8493.59 1491.57 60019.60 00:22:21.160 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.160 rmmod nvme_tcp 00:22:21.160 rmmod nvme_fabrics 00:22:21.160 rmmod nvme_keyring 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:21.160 15:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 2489368 ']' 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 2489368 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2489368 ']' 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2489368 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489368 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489368' 00:22:21.160 killing process with pid 2489368 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2489368 00:22:21.160 15:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2489368 00:22:21.160 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:21.160 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:21.160 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:21.160 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.160 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:22:21.161 
15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.161 15:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:24.446 00:22:24.446 real 0m51.251s 00:22:24.446 user 2m49.058s 00:22:24.446 sys 0m10.204s 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.446 ************************************ 00:22:24.446 END TEST nvmf_perf_adq 00:22:24.446 ************************************ 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.446 ************************************ 00:22:24.446 START TEST nvmf_shutdown 00:22:24.446 ************************************ 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:24.446 * Looking for test storage... 00:22:24.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.446 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.447 15:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:24.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.447 --rc genhtml_branch_coverage=1 00:22:24.447 --rc genhtml_function_coverage=1 00:22:24.447 --rc genhtml_legend=1 00:22:24.447 --rc geninfo_all_blocks=1 00:22:24.447 --rc geninfo_unexecuted_blocks=1 00:22:24.447 00:22:24.447 ' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:24.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.447 --rc genhtml_branch_coverage=1 00:22:24.447 --rc genhtml_function_coverage=1 00:22:24.447 --rc genhtml_legend=1 00:22:24.447 --rc geninfo_all_blocks=1 00:22:24.447 --rc geninfo_unexecuted_blocks=1 00:22:24.447 00:22:24.447 ' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:24.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.447 --rc genhtml_branch_coverage=1 00:22:24.447 --rc genhtml_function_coverage=1 00:22:24.447 --rc genhtml_legend=1 00:22:24.447 --rc geninfo_all_blocks=1 00:22:24.447 --rc geninfo_unexecuted_blocks=1 00:22:24.447 00:22:24.447 ' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:24.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.447 --rc genhtml_branch_coverage=1 00:22:24.447 --rc genhtml_function_coverage=1 00:22:24.447 --rc genhtml_legend=1 00:22:24.447 --rc geninfo_all_blocks=1 00:22:24.447 --rc geninfo_unexecuted_blocks=1 00:22:24.447 00:22:24.447 ' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.447 ************************************ 00:22:24.447 START TEST nvmf_shutdown_tc1 00:22:24.447 ************************************ 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.447 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:24.448 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.448 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:24.448 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:24.448 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.448 15:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.016 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:31.017 15:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.017 15:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.017 15:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.017 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.017 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.017 15:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:22:31.017 00:22:31.017 --- 10.0.0.2 ping statistics --- 00:22:31.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.017 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:22:31.017 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:22:31.017 00:22:31.017 --- 10.0.0.1 ping statistics --- 00:22:31.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.017 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:31.018 15:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=2495063 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 2495063 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2495063 ']' 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.018 15:56:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.018 [2024-10-01 15:56:40.578576] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:31.018 [2024-10-01 15:56:40.578617] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.018 [2024-10-01 15:56:40.636932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.018 [2024-10-01 15:56:40.721728] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.018 [2024-10-01 15:56:40.721765] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.018 [2024-10-01 15:56:40.721772] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.018 [2024-10-01 15:56:40.721778] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.018 [2024-10-01 15:56:40.721784] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.018 [2024-10-01 15:56:40.721909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.018 [2024-10-01 15:56:40.721938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.018 [2024-10-01 15:56:40.722045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.018 [2024-10-01 15:56:40.722046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.275 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.531 [2024-10-01 15:56:41.470150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.531 15:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.531 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.532 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.532 Malloc1 00:22:31.532 [2024-10-01 15:56:41.570056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.532 Malloc2 00:22:31.532 Malloc3 00:22:31.532 Malloc4 00:22:31.532 Malloc5 00:22:31.789 Malloc6 00:22:31.789 Malloc7 00:22:31.789 Malloc8 00:22:31.789 Malloc9 
00:22:31.789 Malloc10 00:22:31.789 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.789 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:31.789 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.789 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2495350 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2495350 /var/tmp/bdevperf.sock 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2495350 ']' 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": 
${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 
00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.048 { 00:22:32.048 "params": { 00:22:32.048 "name": "Nvme$subsystem", 00:22:32.048 "trtype": "$TEST_TRANSPORT", 00:22:32.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.048 "adrfam": "ipv4", 00:22:32.048 "trsvcid": "$NVMF_PORT", 00:22:32.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.048 "hdgst": ${hdgst:-false}, 00:22:32.048 "ddgst": ${ddgst:-false} 00:22:32.048 }, 00:22:32.048 "method": "bdev_nvme_attach_controller" 00:22:32.048 } 00:22:32.048 EOF 00:22:32.048 )") 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.048 [2024-10-01 15:56:42.041969] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:32.048 [2024-10-01 15:56:42.042021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:32.048 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.049 { 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme$subsystem", 00:22:32.049 "trtype": "$TEST_TRANSPORT", 00:22:32.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "$NVMF_PORT", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.049 "hdgst": ${hdgst:-false}, 00:22:32.049 "ddgst": ${ddgst:-false} 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 } 00:22:32.049 EOF 00:22:32.049 )") 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.049 { 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme$subsystem", 00:22:32.049 "trtype": "$TEST_TRANSPORT", 00:22:32.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "$NVMF_PORT", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.049 "hdgst": ${hdgst:-false}, 00:22:32.049 "ddgst": ${ddgst:-false} 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 
00:22:32.049 } 00:22:32.049 EOF 00:22:32.049 )") 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:32.049 { 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme$subsystem", 00:22:32.049 "trtype": "$TEST_TRANSPORT", 00:22:32.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "$NVMF_PORT", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.049 "hdgst": ${hdgst:-false}, 00:22:32.049 "ddgst": ${ddgst:-false} 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 } 00:22:32.049 EOF 00:22:32.049 )") 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:22:32.049 15:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme1", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme2", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme3", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme4", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 
00:22:32.049 "name": "Nvme5", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme6", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme7", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme8", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme9", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 },{ 00:22:32.049 "params": { 00:22:32.049 "name": "Nvme10", 00:22:32.049 "trtype": "tcp", 00:22:32.049 "traddr": "10.0.0.2", 00:22:32.049 "adrfam": "ipv4", 00:22:32.049 "trsvcid": "4420", 00:22:32.049 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.049 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.049 "hdgst": false, 00:22:32.049 "ddgst": false 00:22:32.049 }, 00:22:32.049 "method": "bdev_nvme_attach_controller" 00:22:32.049 }' 00:22:32.049 [2024-10-01 15:56:42.112235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.049 [2024-10-01 15:56:42.184283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2495350 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:33.422 15:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:34.359 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2495350 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2495063 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 
15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.359 { 00:22:34.359 "params": { 00:22:34.359 "name": "Nvme$subsystem", 00:22:34.359 "trtype": "$TEST_TRANSPORT", 00:22:34.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.359 "adrfam": "ipv4", 00:22:34.359 "trsvcid": "$NVMF_PORT", 00:22:34.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.359 "hdgst": ${hdgst:-false}, 00:22:34.359 "ddgst": ${ddgst:-false} 00:22:34.359 }, 00:22:34.359 "method": "bdev_nvme_attach_controller" 00:22:34.359 } 00:22:34.359 EOF 00:22:34.359 )") 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.359 [2024-10-01 15:56:44.485291] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:34.359 [2024-10-01 15:56:44.485337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495790 ] 00:22:34.359 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.360 { 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme$subsystem", 00:22:34.360 "trtype": "$TEST_TRANSPORT", 00:22:34.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "$NVMF_PORT", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.360 "hdgst": ${hdgst:-false}, 00:22:34.360 "ddgst": ${ddgst:-false} 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 } 00:22:34.360 EOF 00:22:34.360 )") 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.360 { 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme$subsystem", 00:22:34.360 "trtype": "$TEST_TRANSPORT", 00:22:34.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "$NVMF_PORT", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.360 "hdgst": ${hdgst:-false}, 00:22:34.360 "ddgst": ${ddgst:-false} 00:22:34.360 }, 00:22:34.360 "method": 
"bdev_nvme_attach_controller" 00:22:34.360 } 00:22:34.360 EOF 00:22:34.360 )") 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:34.360 { 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme$subsystem", 00:22:34.360 "trtype": "$TEST_TRANSPORT", 00:22:34.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "$NVMF_PORT", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.360 "hdgst": ${hdgst:-false}, 00:22:34.360 "ddgst": ${ddgst:-false} 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 } 00:22:34.360 EOF 00:22:34.360 )") 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:22:34.360 15:56:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme1", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme2", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme3", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme4", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 
00:22:34.360 "name": "Nvme5", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme6", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme7", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme8", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme9", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 },{ 00:22:34.360 "params": { 00:22:34.360 "name": "Nvme10", 00:22:34.360 "trtype": "tcp", 00:22:34.360 "traddr": "10.0.0.2", 00:22:34.360 "adrfam": "ipv4", 00:22:34.360 "trsvcid": "4420", 00:22:34.360 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:34.360 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:34.360 "hdgst": false, 00:22:34.360 "ddgst": false 00:22:34.360 }, 00:22:34.360 "method": "bdev_nvme_attach_controller" 00:22:34.360 }' 00:22:34.619 [2024-10-01 15:56:44.553986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.619 [2024-10-01 15:56:44.626584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.993 Running I/O for 1 seconds... 00:22:37.187 2258.00 IOPS, 141.12 MiB/s 00:22:37.187 Latency(us) 00:22:37.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme1n1 : 1.12 227.71 14.23 0.00 0.00 278089.87 19099.06 247663.66 00:22:37.187 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme2n1 : 1.13 282.59 17.66 0.00 0.00 219698.71 17226.61 211712.49 00:22:37.187 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme3n1 : 1.09 301.41 18.84 0.00 0.00 200586.07 3510.86 211712.49 00:22:37.187 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme4n1 : 1.13 286.38 17.90 0.00 0.00 212311.31 4306.65 211712.49 00:22:37.187 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification 
LBA range: start 0x0 length 0x400 00:22:37.187 Nvme5n1 : 1.14 280.41 17.53 0.00 0.00 213960.61 17975.59 225693.50 00:22:37.187 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme6n1 : 1.15 279.42 17.46 0.00 0.00 211711.90 18225.25 222697.57 00:22:37.187 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme7n1 : 1.14 281.61 17.60 0.00 0.00 206938.89 18350.08 227690.79 00:22:37.187 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme8n1 : 1.12 290.40 18.15 0.00 0.00 196720.97 5305.30 212711.13 00:22:37.187 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme9n1 : 1.15 278.72 17.42 0.00 0.00 203100.55 17601.10 224694.86 00:22:37.187 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:37.187 Verification LBA range: start 0x0 length 0x400 00:22:37.187 Nvme10n1 : 1.15 278.03 17.38 0.00 0.00 200581.51 16227.96 225693.50 00:22:37.187 =================================================================================================================== 00:22:37.187 Total : 2786.68 174.17 0.00 0.00 213007.13 3510.86 247663.66 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.187 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.187 rmmod nvme_tcp 00:22:37.187 rmmod nvme_fabrics 00:22:37.446 rmmod nvme_keyring 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 2495063 ']' 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 2495063 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2495063 ']' 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2495063 00:22:37.446 15:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495063 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495063' 00:22:37.446 killing process with pid 2495063 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2495063 00:22:37.446 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2495063 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # 
iptables-restore 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.705 15:56:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.239 00:22:40.239 real 0m15.453s 00:22:40.239 user 0m33.912s 00:22:40.239 sys 0m5.928s 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 ************************************ 00:22:40.239 END TEST nvmf_shutdown_tc1 00:22:40.239 ************************************ 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:40.239 15:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 ************************************ 00:22:40.239 START TEST nvmf_shutdown_tc2 00:22:40.239 ************************************ 
00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:40.239 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.240 15:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.240 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.240 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.240 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.240 15:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.240 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:40.240 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:22:40.241 00:22:40.241 --- 10.0.0.2 ping statistics --- 00:22:40.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.241 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:22:40.241 00:22:40.241 --- 10.0.0.1 ping statistics --- 00:22:40.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.241 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.241 
15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=2496856 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 2496856 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2496856 ']' 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.241 15:56:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.241 [2024-10-01 15:56:50.385415] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:40.241 [2024-10-01 15:56:50.385458] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.500 [2024-10-01 15:56:50.454839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.500 [2024-10-01 15:56:50.534236] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.500 [2024-10-01 15:56:50.534274] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.500 [2024-10-01 15:56:50.534282] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.500 [2024-10-01 15:56:50.534288] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.500 [2024-10-01 15:56:50.534293] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.500 [2024-10-01 15:56:50.534409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.500 [2024-10-01 15:56:50.534528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.500 [2024-10-01 15:56:50.534636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.500 [2024-10-01 15:56:50.534637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.067 [2024-10-01 15:56:51.247716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.067 15:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.067 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.326 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.326 Malloc1 00:22:41.326 [2024-10-01 15:56:51.347555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.326 Malloc2 00:22:41.326 Malloc3 00:22:41.326 Malloc4 00:22:41.326 Malloc5 00:22:41.584 Malloc6 00:22:41.584 Malloc7 00:22:41.584 Malloc8 00:22:41.584 Malloc9 
00:22:41.584 Malloc10 00:22:41.584 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.584 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:41.584 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.584 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2497135 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2497135 /var/tmp/bdevperf.sock 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2497135 ']' 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:41.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.843 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 
"adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": 
${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 [2024-10-01 15:56:51.823518] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 [2024-10-01 15:56:51.823570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497135 ] 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:41.844 { 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme$subsystem", 00:22:41.844 "trtype": "$TEST_TRANSPORT", 00:22:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "$NVMF_PORT", 00:22:41.844 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.844 "hdgst": ${hdgst:-false}, 00:22:41.844 "ddgst": ${ddgst:-false} 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 } 00:22:41.844 EOF 00:22:41.844 )") 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:22:41.844 15:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:41.844 "params": { 00:22:41.844 "name": "Nvme1", 00:22:41.844 "trtype": "tcp", 00:22:41.844 "traddr": "10.0.0.2", 00:22:41.844 "adrfam": "ipv4", 00:22:41.844 "trsvcid": "4420", 00:22:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.844 "hdgst": false, 00:22:41.844 "ddgst": false 00:22:41.844 }, 00:22:41.844 "method": "bdev_nvme_attach_controller" 00:22:41.844 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme2", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme3", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 
"method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme4", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme5", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme6", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme7", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme8", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme9", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 },{ 00:22:41.845 "params": { 00:22:41.845 "name": "Nvme10", 00:22:41.845 "trtype": "tcp", 00:22:41.845 "traddr": "10.0.0.2", 00:22:41.845 "adrfam": "ipv4", 00:22:41.845 "trsvcid": "4420", 00:22:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:41.845 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:41.845 "hdgst": false, 00:22:41.845 "ddgst": false 00:22:41.845 }, 00:22:41.845 "method": "bdev_nvme_attach_controller" 00:22:41.845 }' 00:22:41.845 [2024-10-01 15:56:51.891133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.845 [2024-10-01 15:56:51.964201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.748 Running I/O for 10 seconds... 
00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:43.748 15:56:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:44.007 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2497135 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2497135 
']' 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2497135 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2497135 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2497135' 00:22:44.265 killing process with pid 2497135 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2497135 00:22:44.265 15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2497135 00:22:44.524 Received shutdown signal, test time was about 0.973616 seconds 00:22:44.524 00:22:44.524 Latency(us) 00:22:44.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.524 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme1n1 : 0.92 283.37 17.71 0.00 0.00 223287.27 2777.48 218702.99 00:22:44.524 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme2n1 : 0.93 276.03 17.25 0.00 0.00 225640.11 21096.35 211712.49 00:22:44.524 Job: Nvme3n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme3n1 : 0.91 285.91 17.87 0.00 0.00 213504.05 2980.33 209715.20 00:22:44.524 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme4n1 : 0.93 336.53 21.03 0.00 0.00 177196.16 12857.54 212711.13 00:22:44.524 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme5n1 : 0.90 293.47 18.34 0.00 0.00 199799.89 4462.69 215707.06 00:22:44.524 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme6n1 : 0.91 285.28 17.83 0.00 0.00 201490.79 7552.24 211712.49 00:22:44.524 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme7n1 : 0.92 281.69 17.61 0.00 0.00 201762.42 1732.02 198730.12 00:22:44.524 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme8n1 : 0.93 276.61 17.29 0.00 0.00 202117.97 13419.28 219701.64 00:22:44.524 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme9n1 : 0.89 216.13 13.51 0.00 0.00 252207.30 18350.08 220700.28 00:22:44.524 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.524 Verification LBA range: start 0x0 length 0x400 00:22:44.524 Nvme10n1 : 0.97 263.11 16.44 0.00 0.00 196195.23 8738.13 235679.94 00:22:44.524 =================================================================================================================== 00:22:44.524 Total : 2798.14 174.88 0.00 0.00 207516.21 1732.02 235679.94 00:22:44.783 
15:56:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.721 rmmod nvme_tcp 00:22:45.721 rmmod nvme_fabrics 00:22:45.721 rmmod nvme_keyring 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.721 
15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 2496856 ']' 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2496856 ']' 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2496856' 00:22:45.721 killing process with pid 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2496856 00:22:45.721 15:56:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2496856 00:22:46.290 15:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.290 15:56:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.198 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.198 00:22:48.198 real 0m8.354s 00:22:48.198 user 0m25.836s 00:22:48.198 sys 0m1.416s 00:22:48.198 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.198 15:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.198 ************************************ 00:22:48.198 END TEST nvmf_shutdown_tc2 00:22:48.198 ************************************ 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:48.458 ************************************ 00:22:48.458 START TEST nvmf_shutdown_tc3 00:22:48.458 ************************************ 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.458 15:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.458 15:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.458 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.458 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.458 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.459 15:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.459 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.718 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.718 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.718 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:48.719 00:22:48.719 --- 10.0.0.2 ping statistics --- 00:22:48.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.719 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:48.719 00:22:48.719 --- 10.0.0.1 ping statistics --- 00:22:48.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.719 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=2498372 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 2498372 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2498372 ']' 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:48.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.719 15:56:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.719 [2024-10-01 15:56:58.816898] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:48.719 [2024-10-01 15:56:58.816951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.719 [2024-10-01 15:56:58.888158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.980 [2024-10-01 15:56:58.961029] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.980 [2024-10-01 15:56:58.961070] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.980 [2024-10-01 15:56:58.961076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.980 [2024-10-01 15:56:58.961082] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.980 [2024-10-01 15:56:58.961087] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.980 [2024-10-01 15:56:58.961200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.980 [2024-10-01 15:56:58.961310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.980 [2024-10-01 15:56:58.961396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.980 [2024-10-01 15:56:58.961397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.549 [2024-10-01 15:56:59.680835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.549 15:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.549 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.550 15:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.809 Malloc1 00:22:49.809 [2024-10-01 15:56:59.776417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.809 Malloc2 00:22:49.809 Malloc3 00:22:49.809 Malloc4 00:22:49.809 Malloc5 00:22:49.809 Malloc6 00:22:50.069 Malloc7 00:22:50.069 Malloc8 00:22:50.069 Malloc9 
00:22:50.069 Malloc10 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2498689 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2498689 /var/tmp/bdevperf.sock 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2498689 ']' 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:50.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.069 { 00:22:50.069 "params": { 00:22:50.069 "name": "Nvme$subsystem", 00:22:50.069 "trtype": "$TEST_TRANSPORT", 00:22:50.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.069 "adrfam": "ipv4", 00:22:50.069 "trsvcid": "$NVMF_PORT", 00:22:50.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.069 "hdgst": ${hdgst:-false}, 00:22:50.069 "ddgst": ${ddgst:-false} 00:22:50.069 }, 00:22:50.069 "method": "bdev_nvme_attach_controller" 00:22:50.069 } 00:22:50.069 EOF 00:22:50.069 )") 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.069 { 00:22:50.069 "params": { 00:22:50.069 "name": "Nvme$subsystem", 00:22:50.069 "trtype": "$TEST_TRANSPORT", 00:22:50.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.069 
"adrfam": "ipv4", 00:22:50.069 "trsvcid": "$NVMF_PORT", 00:22:50.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.069 "hdgst": ${hdgst:-false}, 00:22:50.069 "ddgst": ${ddgst:-false} 00:22:50.069 }, 00:22:50.069 "method": "bdev_nvme_attach_controller" 00:22:50.069 } 00:22:50.069 EOF 00:22:50.069 )") 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.069 { 00:22:50.069 "params": { 00:22:50.069 "name": "Nvme$subsystem", 00:22:50.069 "trtype": "$TEST_TRANSPORT", 00:22:50.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.069 "adrfam": "ipv4", 00:22:50.069 "trsvcid": "$NVMF_PORT", 00:22:50.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.069 "hdgst": ${hdgst:-false}, 00:22:50.069 "ddgst": ${ddgst:-false} 00:22:50.069 }, 00:22:50.069 "method": "bdev_nvme_attach_controller" 00:22:50.069 } 00:22:50.069 EOF 00:22:50.069 )") 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.069 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.069 { 00:22:50.069 "params": { 00:22:50.069 "name": "Nvme$subsystem", 00:22:50.069 "trtype": "$TEST_TRANSPORT", 00:22:50.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.070 "adrfam": "ipv4", 00:22:50.070 "trsvcid": "$NVMF_PORT", 00:22:50.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:50.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.070 "hdgst": ${hdgst:-false}, 00:22:50.070 "ddgst": ${ddgst:-false} 00:22:50.070 }, 00:22:50.070 "method": "bdev_nvme_attach_controller" 00:22:50.070 } 00:22:50.070 EOF 00:22:50.070 )") 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.070 { 00:22:50.070 "params": { 00:22:50.070 "name": "Nvme$subsystem", 00:22:50.070 "trtype": "$TEST_TRANSPORT", 00:22:50.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.070 "adrfam": "ipv4", 00:22:50.070 "trsvcid": "$NVMF_PORT", 00:22:50.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.070 "hdgst": ${hdgst:-false}, 00:22:50.070 "ddgst": ${ddgst:-false} 00:22:50.070 }, 00:22:50.070 "method": "bdev_nvme_attach_controller" 00:22:50.070 } 00:22:50.070 EOF 00:22:50.070 )") 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.070 { 00:22:50.070 "params": { 00:22:50.070 "name": "Nvme$subsystem", 00:22:50.070 "trtype": "$TEST_TRANSPORT", 00:22:50.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.070 "adrfam": "ipv4", 00:22:50.070 "trsvcid": "$NVMF_PORT", 00:22:50.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.070 "hdgst": ${hdgst:-false}, 00:22:50.070 "ddgst": 
${ddgst:-false} 00:22:50.070 }, 00:22:50.070 "method": "bdev_nvme_attach_controller" 00:22:50.070 } 00:22:50.070 EOF 00:22:50.070 )") 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:50.070 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:50.070 { 00:22:50.070 "params": { 00:22:50.070 "name": "Nvme$subsystem", 00:22:50.070 "trtype": "$TEST_TRANSPORT", 00:22:50.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.070 "adrfam": "ipv4", 00:22:50.070 "trsvcid": "$NVMF_PORT", 00:22:50.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.070 "hdgst": ${hdgst:-false}, 00:22:50.070 "ddgst": ${ddgst:-false} 00:22:50.070 }, 00:22:50.070 "method": "bdev_nvme_attach_controller" 00:22:50.070 } 00:22:50.070 EOF 00:22:50.070 )") 00:22:50.070 [2024-10-01 15:57:00.248612] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:22:50.070 [2024-10-01 15:57:00.248664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498689 ]
00:22:50.329 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat
00:22:50.329 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq .
00:22:50.329 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=,
00:22:50.329 15:57:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme1",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme2",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme3",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme4",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme5",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme6",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme7",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:22:50.329 "hdgst": false,
00:22:50.329 "ddgst": false
00:22:50.329 },
00:22:50.329 "method": "bdev_nvme_attach_controller"
00:22:50.329 },{
00:22:50.329 "params": {
00:22:50.329 "name": "Nvme8",
00:22:50.329 "trtype": "tcp",
00:22:50.329 "traddr": "10.0.0.2",
00:22:50.329 "adrfam": "ipv4",
00:22:50.329 "trsvcid": "4420",
00:22:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:22:50.329 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:22:50.330 "hdgst": false,
00:22:50.330 "ddgst": false
00:22:50.330 },
00:22:50.330 "method": "bdev_nvme_attach_controller"
00:22:50.330 },{
00:22:50.330 "params": {
00:22:50.330 "name": "Nvme9",
00:22:50.330 "trtype": "tcp",
00:22:50.330 "traddr": "10.0.0.2",
00:22:50.330 "adrfam": "ipv4",
00:22:50.330 "trsvcid": "4420",
00:22:50.330 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:50.330 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:50.330 "hdgst": false,
00:22:50.330 "ddgst": false
00:22:50.330 },
00:22:50.330 "method": "bdev_nvme_attach_controller"
00:22:50.330 },{
00:22:50.330 "params": {
00:22:50.330 "name": "Nvme10",
00:22:50.330 "trtype": "tcp",
00:22:50.330 "traddr": "10.0.0.2",
00:22:50.330 "adrfam": "ipv4",
00:22:50.330 "trsvcid": "4420",
00:22:50.330 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:50.330 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:50.330 "hdgst": false,
00:22:50.330 "ddgst": false
00:22:50.330 },
00:22:50.330 "method": "bdev_nvme_attach_controller"
00:22:50.330 }'
00:22:50.330 [2024-10-01 15:57:00.321923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:50.330 [2024-10-01 15:57:00.395895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:22:52.232 Running I/O for 10 seconds...
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2498372
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2498372 ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2498372
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2498372
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2498372'
00:22:52.808 killing process with pid 2498372
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2498372
00:22:52.808 15:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2498372
00:22:52.808 [2024-10-01 15:57:02.953555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d47a10 is same with the state(6) to be set
00:22:52.809 [2024-10-01 15:57:02.961187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4a5c0 is same with the state(6) to be set
00:22:52.809 [2024-10-01 15:57:02.964824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d47f00 is same with the state(6) to be set
00:22:52.810 [2024-10-01 15:57:02.966507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set
00:22:52.810 [2024-10-01 15:57:02.966693]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.810 [2024-10-01 15:57:02.966770] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966849] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966930] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.966950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d483d0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968110] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968186] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968265] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.811 [2024-10-01 15:57:02.968293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968344] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968417] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.968455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d488c0 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969224] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969302] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969384] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.812 [2024-10-01 15:57:02.969455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969461] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.969524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d48d90 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.970284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49260 is same with the state(6) to be set 00:22:52.813 [2024-10-01 15:57:02.970297] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49260 is same with the state(6) to be set
00:22:52.814 [2024-10-01 15:57:02.971650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49730 is same with the state(6) to be set
00:22:52.814 [2024-10-01 15:57:02.972849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973229]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49c20 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0460 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193c280 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19311f0 is same with the state(6) to be set
00:22:52.815 [2024-10-01 15:57:02.973513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.815 [2024-10-01 15:57:02.973553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.815 [2024-10-01 15:57:02.973560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851610 is same with the state(6) to be set
00:22:52.816 [2024-10-01 15:57:02.973598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf550 is same with the state(6) to be set
00:22:52.816 [2024-10-01 15:57:02.973682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67fa0 is same with the state(6) to be set
00:22:52.816 [2024-10-01 15:57:02.973763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.816 [2024-10-01 15:57:02.973807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:52.816 [2024-10-01 15:57:02.973815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1939cb0 is same with the state(6) to be set 00:22:52.816 [2024-10-01 15:57:02.973843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193be20 is same with the state(6) to be set 00:22:52.816 [2024-10-01 15:57:02.973930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.816 [2024-10-01 15:57:02.973983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.973990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d410 is same with the state(6) to be set 00:22:52.816 [2024-10-01 15:57:02.974796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.816 [2024-10-01 15:57:02.974929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.816 [2024-10-01 15:57:02.974936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.817 [2024-10-01 15:57:02.974943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.974951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.974958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.974966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.974981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.974988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.974997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975031] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975113] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 
[2024-10-01 15:57:02.975286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.817 [2024-10-01 15:57:02.975393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.817 [2024-10-01 15:57:02.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 
15:57:02.975633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.818 [2024-10-01 15:57:02.975761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.818 [2024-10-01 15:57:02.975770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.975778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.975784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.975793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.975799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.975826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:52.819 [2024-10-01 15:57:02.975886] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e29690 was disconnected and freed. reset controller. 00:22:52.819 [2024-10-01 15:57:02.976955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.976976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.976990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.976999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.977007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.977015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.977024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.977033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.819 [2024-10-01 15:57:02.977042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.819 [2024-10-01 15:57:02.977050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.819 [2024-10-01 15:57:02.977059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.819 [2024-10-01 15:57:02.977066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" record pairs repeated for cid:6 through cid:62 (lba 25344 through 32512, len:128 each) ...]
00:22:52.821 [2024-10-01 15:57:02.982183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.821 [2024-10-01 15:57:02.982189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.821 [2024-10-01 15:57:02.982272] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1de2160 was disconnected and freed. reset controller.
00:22:52.821 [2024-10-01 15:57:02.983335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.821 [2024-10-01 15:57:02.983356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION (00/08)" record pairs repeated for cid:1 through cid:49 (lba 24704 through 30848, len:128 each) ...]
00:22:52.822 [2024-10-01 15:57:02.984170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.822 [2024-10-01 15:57:02.984177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984458] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e2ac10 was disconnected and freed. reset controller. 00:22:52.822 [2024-10-01 15:57:02.984514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0460 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.822 [2024-10-01 15:57:02.984562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.822 [2024-10-01 15:57:02.984577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.822 [2024-10-01 15:57:02.984591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.822 [2024-10-01 15:57:02.984606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3550 is same with the state(6) to be set 00:22:52.822 [2024-10-01 15:57:02.984626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193c280 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19311f0 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851610 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf550 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d67fa0 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1939cb0 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193be20 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d410 (9): Bad file descriptor 00:22:52.822 [2024-10-01 15:57:02.984767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.984988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.984998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.822 [2024-10-01 15:57:02.985142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.822 [2024-10-01 15:57:02.985148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 
15:57:02.985334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:52.823 [2024-10-01 15:57:02.985613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985700] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.985820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.985827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b40490 is same with the state(6) to be set 00:22:52.823 [2024-10-01 15:57:02.985887] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b40490 was disconnected and freed. reset controller. 00:22:52.823 [2024-10-01 15:57:02.988020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:52.823 [2024-10-01 15:57:02.989917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:52.823 [2024-10-01 15:57:02.989945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:52.823 [2024-10-01 15:57:02.989955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.823 [2024-10-01 15:57:02.989980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da3550 (9): Bad file descriptor 00:22:52.823 [2024-10-01 15:57:02.990253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.823 [2024-10-01 15:57:02.990270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d410 with addr=10.0.0.2, port=4420 00:22:52.823 [2024-10-01 15:57:02.990279] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d410 is same with the state(6) to be set 00:22:52.823 [2024-10-01 15:57:02.990345] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:52.823 [2024-10-01 15:57:02.990476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.823 [2024-10-01 15:57:02.990584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.823 [2024-10-01 15:57:02.990594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de0d20 is same with the state(6) to be set 00:22:52.824 [2024-10-01 15:57:02.990651] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1de0d20 was disconnected and freed. reset controller. 00:22:52.824 [2024-10-01 15:57:02.990696] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:52.824 [2024-10-01 15:57:02.991020] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:52.824 [2024-10-01 15:57:02.991066] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:53.086 [2024-10-01 15:57:02.991465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.086 [2024-10-01 15:57:02.991483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf550 with addr=10.0.0.2, port=4420 00:22:53.087 [2024-10-01 15:57:02.991492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf550 is same with the state(6) to be set 00:22:53.087 [2024-10-01 15:57:02.991657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.087 [2024-10-01 15:57:02.991670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x193c280 with addr=10.0.0.2, port=4420 00:22:53.087 [2024-10-01 15:57:02.991678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193c280 is same with the state(6) to be set 00:22:53.087 [2024-10-01 15:57:02.991696] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d410 (9): Bad file descriptor 00:22:53.087 [2024-10-01 15:57:02.992698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:53.087 [2024-10-01 15:57:02.992819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.087 [2024-10-01 15:57:02.992834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da3550 with addr=10.0.0.2, port=4420 00:22:53.087 [2024-10-01 15:57:02.992843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3550 is same with the state(6) to be set 00:22:53.087 [2024-10-01 15:57:02.992853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf550 (9): Bad file descriptor 00:22:53.087 [2024-10-01 15:57:02.992873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193c280 (9): Bad file descriptor 00:22:53.087 [2024-10-01 15:57:02.992882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:53.087 [2024-10-01 15:57:02.992890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:53.087 [2024-10-01 15:57:02.992900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:53.087 [2024-10-01 15:57:02.992958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.992970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.992985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.992993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.087 [2024-10-01 15:57:02.993253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.087 [2024-10-01 15:57:02.993455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.087 [2024-10-01 15:57:02.993463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 
15:57:02.993637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 
[2024-10-01 15:57:02.993930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.993988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.993996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.994004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.994012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.994021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.994036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.994044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.088 [2024-10-01 15:57:02.994051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.088 [2024-10-01 15:57:02.994061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25730 is same with the state(6) to be set 00:22:53.088 [2024-10-01 15:57:02.994115] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e25730 was disconnected and freed. reset controller. 00:22:53.088 [2024-10-01 15:57:02.994154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:53.088 [2024-10-01 15:57:02.994251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.088 [2024-10-01 15:57:02.994263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1939cb0 with addr=10.0.0.2, port=4420 00:22:53.088 [2024-10-01 15:57:02.994271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1939cb0 is same with the state(6) to be set 00:22:53.088 [2024-10-01 15:57:02.994280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da3550 (9): Bad file descriptor 00:22:53.088 [2024-10-01 15:57:02.994289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:53.088 [2024-10-01 15:57:02.994297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:53.088 [2024-10-01 15:57:02.994304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:53.088 [2024-10-01 15:57:02.994315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.089 [2024-10-01 15:57:02.994323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:53.089 [2024-10-01 15:57:02.994330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:53.089 [2024-10-01 15:57:02.995483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.089 [2024-10-01 15:57:02.995498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:53.089 [2024-10-01 15:57:02.995504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:53.089 [2024-10-01 15:57:02.995525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1939cb0 (9): Bad file descriptor 00:22:53.089 [2024-10-01 15:57:02.995536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:53.089 [2024-10-01 15:57:02.995543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:53.089 [2024-10-01 15:57:02.995550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:53.089 [2024-10-01 15:57:02.995618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.089 [2024-10-01 15:57:02.995735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.089 [2024-10-01 15:57:02.995749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19311f0 with addr=10.0.0.2, port=4420 00:22:53.089 [2024-10-01 15:57:02.995757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19311f0 is same with the state(6) to be set 00:22:53.089 [2024-10-01 15:57:02.995765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:53.089 [2024-10-01 15:57:02.995772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:53.089 [2024-10-01 15:57:02.995780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:22:53.089 [2024-10-01 15:57:02.995820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.089 [2024-10-01 15:57:02.995830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeated for cid:1-63 (nsid:1, lba:24704-32640, len:128 each), timestamps 15:57:02.995845-15:57:02.996873, condensed ...]
00:22:53.090 [2024-10-01 15:57:02.996883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b417a0 is same with the state(6) to be set
00:22:53.090 [2024-10-01 15:57:02.997857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.090 [2024-10-01 15:57:02.997874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeated for cid:5-40 (nsid:1, lba:25216-29696, len:128 each), timestamps 15:57:02.997886-15:57:02.998475, condensed ...]
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 
[2024-10-01 15:57:02.998657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:02.998902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:02.998910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d437c0 is same with the state(6) to be set 00:22:53.092 [2024-10-01 15:57:03.000099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.092 [2024-10-01 15:57:03.000126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.092 [2024-10-01 15:57:03.000234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.092 [2024-10-01 15:57:03.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.093 [2024-10-01 15:57:03.000411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 
15:57:03.000770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.093 [2024-10-01 15:57:03.000895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.093 [2024-10-01 15:57:03.000902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.000990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.000997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 
[2024-10-01 15:57:03.001043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.001138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.001146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26c40 is same with the state(6) to be set 00:22:53.094 [2024-10-01 15:57:03.002127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.094 [2024-10-01 15:57:03.002307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002396] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.094 [2024-10-01 15:57:03.002494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.094 [2024-10-01 15:57:03.002503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 
15:57:03.002676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 
[2024-10-01 15:57:03.002951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.002990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.002997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.095 [2024-10-01 15:57:03.003103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.095 [2024-10-01 15:57:03.003110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.096 [2024-10-01 15:57:03.003119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.096 [2024-10-01 15:57:03.003127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.096 [2024-10-01 15:57:03.003135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.096 [2024-10-01 15:57:03.003143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.096 [2024-10-01 15:57:03.003151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.096 [2024-10-01 15:57:03.003159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.096 [2024-10-01 15:57:03.003167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.096 [2024-10-01 15:57:03.003174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.096 [2024-10-01 15:57:03.003182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e281c0 is same with the state(6) to be set
00:22:53.096 [2024-10-01 15:57:03.004128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.096 [2024-10-01 15:57:03.004146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:53.096 [2024-10-01 15:57:03.004156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:53.096 [2024-10-01 15:57:03.004165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:53.096 task offset: 26496 on job bdev=Nvme9n1 fails
00:22:53.096
00:22:53.096 Latency(us)
00:22:53.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.096 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme1n1 ended in about 0.91 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme1n1 : 0.91 221.27 13.83 70.46 0.00 217236.88 7957.94 207717.91
00:22:53.096 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme2n1 ended in about 0.92 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme2n1 : 0.92 209.34 13.08 69.78 0.00 223252.72 16602.45 217704.35
00:22:53.096 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme3n1 ended in about 0.91 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme3n1 : 0.91 278.49 17.41 6.58 0.00 214219.42 19099.06 206719.27
00:22:53.096 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme4n1 ended in about 0.92 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme4n1 : 0.92 213.23 13.33 69.63 0.00 212707.17 15166.90 212711.13
00:22:53.096 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme5n1 ended in about 0.91 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme5n1 : 0.91 211.83 13.24 70.61 0.00 209001.57 11858.90 216705.71
00:22:53.096 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme6n1 ended in about 0.91 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme6n1 : 0.91 209.86 13.12 69.95 0.00 207262.48 18225.25 215707.06
00:22:53.096 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme7n1 ended in about 0.92 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme7n1 : 0.92 208.37 13.02 69.46 0.00 205020.40 16227.96 201726.05
00:22:53.096 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme8n1 ended in about 0.92 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme8n1 : 0.92 207.91 12.99 69.30 0.00 201693.26 13981.01 220700.28
00:22:53.096 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme9n1 ended in about 0.90 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme9n1 : 0.90 212.68 13.29 70.89 0.00 192651.82 7895.53 226692.14
00:22:53.096 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.096 Job: Nvme10n1 ended in about 0.91 seconds with error
00:22:53.096 Verification LBA range: start 0x0 length 0x400
00:22:53.096 Nvme10n1 : 0.91 211.59 13.22 70.53 0.00 189951.02 10485.76 233682.65
00:22:53.096 ===================================================================================================================
00:22:53.096 Total : 2184.58 136.54 637.19 0.00 207353.51 7895.53 233682.65
00:22:53.096 [2024-10-01 15:57:03.034617] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:53.096 [2024-10-01 15:57:03.034710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19311f0
(9): Bad file descriptor
00:22:53.096 [2024-10-01 15:57:03.034857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:53.096 [2024-10-01 15:57:03.035102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.096 [2024-10-01 15:57:03.035123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x193be20 with addr=10.0.0.2, port=4420
00:22:53.096 [2024-10-01 15:57:03.035135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193be20 is same with the state(6) to be set
00:22:53.096 [2024-10-01 15:57:03.035243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.096 [2024-10-01 15:57:03.035256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d67fa0 with addr=10.0.0.2, port=4420
00:22:53.096 [2024-10-01 15:57:03.035264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67fa0 is same with the state(6) to be set
00:22:53.096 [2024-10-01 15:57:03.035418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.096 [2024-10-01 15:57:03.035431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1851610 with addr=10.0.0.2, port=4420
00:22:53.096 [2024-10-01 15:57:03.035439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851610 is same with the state(6) to be set
00:22:53.096 [2024-10-01 15:57:03.035447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:53.096 [2024-10-01 15:57:03.035455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:53.096 [2024-10-01 15:57:03.035465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:53.096 [2024-10-01 15:57:03.035486] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.035507] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.035526] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.035536] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.035548] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.035558] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.096 [2024-10-01 15:57:03.036451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:53.096 [2024-10-01 15:57:03.036467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:53.096 [2024-10-01 15:57:03.036477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:53.096 [2024-10-01 15:57:03.036485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:53.096 [2024-10-01 15:57:03.036493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:53.096 [2024-10-01 15:57:03.036514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.096 [2024-10-01 15:57:03.036651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.036666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da0460 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.036675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0460 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.036685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193be20 (9): Bad file descriptor
00:22:53.097 [2024-10-01 15:57:03.036695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d67fa0 (9): Bad file descriptor
00:22:53.097 [2024-10-01 15:57:03.036705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851610 (9): Bad file descriptor
00:22:53.097 [2024-10-01 15:57:03.037595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.037617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d410 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.037627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d410 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.037850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.037868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x193c280 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.037877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193c280 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.038016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.038028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf550 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.038036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf550 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.038168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.038180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da3550 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.038188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3550 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.038332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.097 [2024-10-01 15:57:03.038344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1939cb0 with addr=10.0.0.2, port=4420
00:22:53.097 [2024-10-01 15:57:03.038356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1939cb0 is same with the state(6) to be set
00:22:53.097 [2024-10-01 15:57:03.038367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da0460 (9): Bad file descriptor
00:22:53.097 [2024-10-01 15:57:03.038376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:53.097 [2024-10-01 15:57:03.038384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:53.097 [2024-10-01 15:57:03.038393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:53.097 [2024-10-01 15:57:03.038404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:53.097 [2024-10-01 15:57:03.038521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d410 (9): Bad file descriptor 00:22:53.097 [2024-10-01 15:57:03.038529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193c280 (9): Bad file descriptor 00:22:53.097 [2024-10-01 15:57:03.038538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf550 (9): Bad file descriptor 00:22:53.097 [2024-10-01 15:57:03.038547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da3550 (9): Bad file descriptor 00:22:53.097 [2024-10-01 15:57:03.038556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1939cb0 (9): Bad file descriptor 00:22:53.097 [2024-10-01 15:57:03.038563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:53.097 [2024-10-01 15:57:03.038632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:53.097 [2024-10-01 15:57:03.038707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:53.097 [2024-10-01 15:57:03.038713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:53.097 [2024-10-01 15:57:03.038735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:53.097 [2024-10-01 15:57:03.038743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.097 [2024-10-01 15:57:03.038761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.356 15:57:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2498689 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2498689 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2498689 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:54.291 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.292 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.292 rmmod nvme_tcp 00:22:54.292 rmmod nvme_fabrics 00:22:54.292 rmmod nvme_keyring 00:22:54.292 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 2498372 ']' 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 2498372 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2498372 ']' 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2498372 00:22:54.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2498372) - No such process 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2498372 is not found' 00:22:54.550 Process with pid 2498372 is not found 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # 
iptables-save 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.550 15:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.490 00:22:56.490 real 0m8.129s 00:22:56.490 user 0m20.591s 00:22:56.490 sys 0m1.460s 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.490 ************************************ 00:22:56.490 END TEST nvmf_shutdown_tc3 00:22:56.490 ************************************ 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.490 ************************************ 00:22:56.490 START TEST nvmf_shutdown_tc4 00:22:56.490 ************************************ 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.490 15:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.490 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 
00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:56.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:56.491 15:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:56.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 
-- # (( 1 == 0 )) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:56.491 Found net devices under 0000:86:00.0: cvl_0_0 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:56.491 Found net devices under 0000:86:00.1: cvl_0_1 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:56.491 15:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.491 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:56.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:22:56.751 00:22:56.751 --- 10.0.0.2 ping statistics --- 00:22:56.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.751 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:56.751 00:22:56.751 --- 10.0.0.1 ping statistics --- 00:22:56.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.751 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:56.751 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:57.009 15:57:06 
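The `nvmf_tcp_init` sequence above (create a namespace, move the target-side interface into it, assign 10.0.0.1/10.0.0.2, bring links up, verify with ping in both directions) can be sketched as a dry-run script. Interface and namespace names mirror the log; the `run` wrapper only prints the privileged commands instead of executing them, and both names are illustrative, not harness functions:

```shell
# Dry-run sketch of the two-interface namespace topology built above.
# The real commands require root, so run() just prints them.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                       # target-side namespace
run ip link set cvl_0_0 netns "$NS"          # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, host ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                       # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host
```

Isolating the target interface in its own namespace is what lets a single machine act as both NVMe-oF target (10.0.0.2) and initiator (10.0.0.1) over a real NIC pair.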
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=2499856 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 2499856 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2499856 ']' 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.009 15:57:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.009 [2024-10-01 15:57:07.015415] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:22:57.009 [2024-10-01 15:57:07.015460] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.009 [2024-10-01 15:57:07.088813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.009 [2024-10-01 15:57:07.163369] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.009 [2024-10-01 15:57:07.163411] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.009 [2024-10-01 15:57:07.163418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.009 [2024-10-01 15:57:07.163425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.009 [2024-10-01 15:57:07.163430] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.009 [2024-10-01 15:57:07.163552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.009 [2024-10-01 15:57:07.163661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.009 [2024-10-01 15:57:07.163769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.009 [2024-10-01 15:57:07.163770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.064 [2024-10-01 15:57:07.887325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.064 15:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.064 15:57:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.064 Malloc1 00:22:58.064 [2024-10-01 15:57:07.978826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.064 Malloc2 00:22:58.064 Malloc3 00:22:58.064 Malloc4 00:22:58.064 Malloc5 00:22:58.064 Malloc6 00:22:58.064 Malloc7 00:22:58.332 Malloc8 00:22:58.332 Malloc9 
00:22:58.332 Malloc10 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2500201 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:58.332 15:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:58.332 [2024-10-01 15:57:08.480036] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
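The `-r` argument passed to `spdk_nvme_perf` above is a single space-separated transport ID string. A small sketch of assembling one; the `trid` helper is hypothetical, but the key names (`trtype`, `adrfam`, `traddr`, `trsvcid`) are exactly those shown in the log's command line:

```shell
# Build a transport ID string of the shape spdk_nvme_perf's -r flag
# takes: transport type, address family, target address, service id.
trid() {
  echo "trtype:$1 adrfam:$2 traddr:$3 trsvcid:$4"
}

# Reproduces the value from the log's perf invocation.
trid tcp IPV4 10.0.0.2 4420
```

The string must be quoted as one argument when passed to `-r`, since the fields are separated by spaces.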
00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2499856 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2499856 ']' 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2499856 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499856 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499856' 00:23:03.602 killing process with pid 2499856 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2499856 00:23:03.602 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2499856 00:23:03.602 [2024-10-01 15:57:13.485651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd565b0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 
15:57:13.485702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd565b0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.485710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd565b0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.485716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd565b0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.485722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd565b0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56a80 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56a80 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56a80 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56a80 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 [2024-10-01 15:57:13.486543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f50 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56f50 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with 
error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 starting I/O failed: -6 00:23:03.602 [2024-10-01 15:57:13.486920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486936]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 [2024-10-01 15:57:13.486962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.486969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 starting I/O failed: -6 00:23:03.602 [2024-10-01 15:57:13.486976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd560e0 is same with the state(6) to be set 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 starting I/O failed: -6 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 Write completed with error (sct=0, sc=8) 00:23:03.602 [2024-10-01 15:57:13.487112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write
completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 [2024-10-01 15:57:13.487695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is 
same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.487715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.487723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 starting I/O failed: -6 00:23:03.603 [2024-10-01 15:57:13.487729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 [2024-10-01 15:57:13.487736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.487742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 [2024-10-01 15:57:13.487749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.487756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 starting I/O failed: -6 00:23:03.603 [2024-10-01 15:57:13.487764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 [2024-10-01 15:57:13.487771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8e50 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8)
00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.488037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:03.603 starting I/O failed: -6 00:23:03.603 starting I/O failed: -6 00:23:03.603 starting I/O failed: -6 00:23:03.603 starting I/O failed: -6 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed 
with error (sct=0, sc=8) 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.488641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 starting I/O failed: -6 00:23:03.603 [2024-10-01 15:57:13.488663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.488671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 starting I/O failed: -6 00:23:03.603 [2024-10-01 15:57:13.488677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 [2024-10-01 15:57:13.488685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.488691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 [2024-10-01 15:57:13.488698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 [2024-10-01 15:57:13.488705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8980 is same with the state(6) to be set 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 starting I/O failed: -6 00:23:03.603 Write completed with error (sct=0, sc=8) 00:23:03.603 
00:23:03.603 Write completed with error (sct=0, sc=8)
00:23:03.603 starting I/O failed: -6
[the two lines above repeat, interleaved, through 00:23:03.608]
00:23:03.604 [2024-10-01 15:57:13.489213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd578f0 is same with the state(6) to be set
[the same tcp.c:1773 error repeats for tqpair=0xd57dc0, 0xd58290, 0xd57420, 0xc591a0, 0xc59690, 0xc587c0, 0xc5a030, 0xae51c0, 0xae5690 and 0xc59b60]
00:23:03.604 [2024-10-01 15:57:13.489227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:03.605 [2024-10-01 15:57:13.490893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:03.605 NVMe io qpair process completion error
00:23:03.605 [2024-10-01 15:57:13.496762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:03.605 [2024-10-01 15:57:13.497561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:03.606 [2024-10-01 15:57:13.498598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:03.607 [2024-10-01 15:57:13.500300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:03.607 NVMe io qpair process completion error
00:23:03.607 [2024-10-01 15:57:13.501239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:03.608 [2024-10-01 15:57:13.502129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, 
sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.608 starting I/O failed: -6 00:23:03.608 [2024-10-01 15:57:13.503114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:03.608 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting 
I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 
starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 
00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 [2024-10-01 15:57:13.504928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:03.609 NVMe io qpair process completion error 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write 
completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 [2024-10-01 15:57:13.506056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 
00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.609 starting I/O failed: -6 00:23:03.609 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write 
completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 [2024-10-01 15:57:13.506958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, 
sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O 
failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 [2024-10-01 15:57:13.507969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write 
completed with error (sct=0, sc=8) 00:23:03.610 starting I/O failed: -6 00:23:03.610 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 
Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 
00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 [2024-10-01 15:57:13.509580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:03.611 NVMe io qpair process completion error 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 starting I/O failed: -6 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error (sct=0, sc=8) 00:23:03.611 Write completed with error 
00:23:03.611 Write completed with error (sct=0, sc=8)
00:23:03.611 starting I/O failed: -6
(repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted)
00:23:03.611 [2024-10-01 15:57:13.510682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
(repeated entries omitted)
00:23:03.612 [2024-10-01 15:57:13.511533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(repeated entries omitted)
00:23:03.612 [2024-10-01 15:57:13.512552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(repeated entries omitted)
00:23:03.613 [2024-10-01 15:57:13.516521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:03.613 NVMe io qpair process completion error
(repeated entries omitted)
00:23:03.613 [2024-10-01 15:57:13.517476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(repeated entries omitted)
00:23:03.614 [2024-10-01 15:57:13.518397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(repeated entries omitted)
00:23:03.614 [2024-10-01 15:57:13.519425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(repeated entries omitted)
00:23:03.615 [2024-10-01 15:57:13.522622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:03.615 NVMe io qpair process completion error
(repeated entries omitted)
00:23:03.616 [2024-10-01 15:57:13.525206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(repeated entries continue; log truncated mid-entry)
completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 
Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 [2024-10-01 15:57:13.527065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:03.617 NVMe io qpair process completion error 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed 
with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 [2024-10-01 15:57:13.528007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, 
sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 starting I/O failed: -6 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.617 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 
Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 [2024-10-01 15:57:13.528948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O 
failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write 
completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 [2024-10-01 15:57:13.529968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O 
failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.618 starting I/O failed: -6 00:23:03.618 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting 
I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 
starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 [2024-10-01 15:57:13.532398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:03.619 NVMe io qpair process completion error 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 
starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 [2024-10-01 15:57:13.533478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: 
-6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 starting I/O failed: -6 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.619 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 
00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 [2024-10-01 15:57:13.534284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 starting I/O failed: -6 00:23:03.620 Write completed with error (sct=0, sc=8) 00:23:03.620 Write completed 
with error (sct=0, sc=8)
00:23:03.620 starting I/O failed: -6
00:23:03.620 Write completed with error (sct=0, sc=8)
00:23:03.620 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated for the remaining queued writes; repeats trimmed ...]
00:23:03.620 [2024-10-01 15:57:13.535318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeats trimmed ...]
00:23:03.621 [2024-10-01 15:57:13.538770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:03.621 NVMe io qpair process completion error
[... repeats trimmed ...]
00:23:03.621 [2024-10-01 15:57:13.539838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeats trimmed ...]
00:23:03.622 [2024-10-01 15:57:13.540728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeats trimmed ...]
00:23:03.622 [2024-10-01 15:57:13.541715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeats trimmed ...]
00:23:03.623 [2024-10-01 15:57:13.544707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:03.623 NVMe io qpair process completion error
00:23:03.623 Initializing NVMe Controllers
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:03.623 Controller IO queue size 128, less than required.
00:23:03.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:03.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:03.623 Initialization complete. Launching workers.
00:23:03.623 ========================================================
00:23:03.623 Latency(us)
00:23:03.623 Device Information : IOPS MiB/s Average min max
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2251.25 96.73 56862.41 854.23 107813.78
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2233.14 95.96 57358.02 693.85 117664.68
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2241.76 96.33 56563.05 676.51 105550.02
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2222.79 95.51 57056.47 702.06 103335.34
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2200.59 94.56 57646.38 908.70 101065.67
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2215.03 95.18 57284.18 866.65 99455.89
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2143.03 92.08 59247.73 915.11 97694.46
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2159.20 92.78 58831.54 651.24 111325.50
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2158.98 92.77 58853.23 944.61 97522.42
00:23:03.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2122.77 91.21 59279.04 703.83 96199.28
00:23:03.623 ========================================================
00:23:03.623 Total : 21948.56 943.10 57879.25 651.24 117664.68
00:23:03.623
00:23:03.623 [2024-10-01 15:57:13.547635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f4e60 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f54c0 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ee960 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eec90 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eefc0 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f0bb0 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5190 is same with the state(6) to be set
00:23:03.623 [2024-10-01 15:57:13.547853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f09d0 is same with the state(6) to be set
00:23:03.624 [2024-10-01 15:57:13.547902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ee630 is same with the state(6) to be set
00:23:03.624 [2024-10-01 15:57:13.547931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f07f0 is same with the state(6) to be set
00:23:03.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:03.883 15:57:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2500201
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2500201
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2500201
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 2499856 ']'
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 2499856
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2499856 ']'
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2499856
00:23:04.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2499856) - No such process
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2499856 is not found'
00:23:04.819 Process with pid 2499856 is not found
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:04.819 15:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:07.353
00:23:07.353 real 0m10.406s
00:23:07.353 user 0m27.339s
00:23:07.353 sys 0m5.271s
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:07.353 ************************************
00:23:07.353 END TEST nvmf_shutdown_tc4
00:23:07.353 ************************************
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:07.353
00:23:07.353 real 0m42.854s
00:23:07.353 user 1m47.906s
00:23:07.353 sys 0m14.395s
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:07.353 ************************************
00:23:07.353 END TEST nvmf_shutdown
00:23:07.353 ************************************
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:23:07.353
00:23:07.353 real 12m5.769s
00:23:07.353 user 26m13.535s
00:23:07.353 sys 3m39.412s
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:07.353 15:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:07.353 ************************************
00:23:07.353 END TEST nvmf_target_extra
00:23:07.353 ************************************
00:23:07.353 15:57:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:23:07.353 15:57:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:07.353 15:57:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:07.353 15:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:07.353 ************************************
00:23:07.353 START TEST nvmf_host
00:23:07.353 ************************************
00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:07.354 * Looking for test storage... 00:23:07.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc 
geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.354 15:57:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.355 ************************************ 00:23:07.355 START TEST nvmf_multicontroller 00:23:07.355 ************************************ 00:23:07.355 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.355 * Looking for test storage... 
00:23:07.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.613 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.614 --rc genhtml_branch_coverage=1 00:23:07.614 --rc genhtml_function_coverage=1 
00:23:07.614 --rc genhtml_legend=1 00:23:07.614 --rc geninfo_all_blocks=1 00:23:07.614 --rc geninfo_unexecuted_blocks=1 00:23:07.614 00:23:07.614 ' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.614 --rc genhtml_branch_coverage=1 00:23:07.614 --rc genhtml_function_coverage=1 00:23:07.614 --rc genhtml_legend=1 00:23:07.614 --rc geninfo_all_blocks=1 00:23:07.614 --rc geninfo_unexecuted_blocks=1 00:23:07.614 00:23:07.614 ' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.614 --rc genhtml_branch_coverage=1 00:23:07.614 --rc genhtml_function_coverage=1 00:23:07.614 --rc genhtml_legend=1 00:23:07.614 --rc geninfo_all_blocks=1 00:23:07.614 --rc geninfo_unexecuted_blocks=1 00:23:07.614 00:23:07.614 ' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.614 --rc genhtml_branch_coverage=1 00:23:07.614 --rc genhtml_function_coverage=1 00:23:07.614 --rc genhtml_legend=1 00:23:07.614 --rc geninfo_all_blocks=1 00:23:07.614 --rc geninfo_unexecuted_blocks=1 00:23:07.614 00:23:07.614 ' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.614 15:57:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.614 15:57:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.198 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:14.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:14.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:14.199 15:57:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:14.199 Found net devices under 0000:86:00.0: cvl_0_0 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:14.199 Found net devices under 0000:86:00.1: cvl_0_1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.199 15:57:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.199 15:57:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:23:14.199 00:23:14.199 --- 10.0.0.2 ping statistics --- 00:23:14.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.199 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:14.199 00:23:14.199 --- 10.0.0.1 ping statistics --- 00:23:14.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.199 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:14.199 15:57:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=2505259 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 2505259 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2505259 ']' 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.199 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-10-01 15:57:23.670857] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:23:14.200 [2024-10-01 15:57:23.670907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.200 [2024-10-01 15:57:23.726149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.200 [2024-10-01 15:57:23.803514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.200 [2024-10-01 15:57:23.803552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.200 [2024-10-01 15:57:23.803559] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.200 [2024-10-01 15:57:23.803566] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.200 [2024-10-01 15:57:23.803571] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.200 [2024-10-01 15:57:23.803626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.200 [2024-10-01 15:57:23.803659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.200 [2024-10-01 15:57:23.803660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-10-01 15:57:23.947229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:23:14.200 Malloc0 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-10-01 15:57:24.006155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.200 
15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-10-01 15:57:24.014057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 Malloc1 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2505286 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2505286 /var/tmp/bdevperf.sock 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2505286 ']' 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.200 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.139 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.139 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:15.139 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.139 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.139 15:57:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.139 NVMe0n1 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.139 15:57:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.139 1 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.139 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.139 request: 00:23:15.139 { 00:23:15.139 "name": "NVMe0", 00:23:15.139 "trtype": "tcp", 00:23:15.139 "traddr": "10.0.0.2", 00:23:15.139 "adrfam": "ipv4", 00:23:15.139 "trsvcid": "4420", 00:23:15.139 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:15.139 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:15.139 "hostaddr": "10.0.0.1", 00:23:15.139 "prchk_reftag": false, 00:23:15.139 "prchk_guard": false, 00:23:15.139 "hdgst": false, 00:23:15.139 "ddgst": false, 00:23:15.139 "allow_unrecognized_csi": false, 00:23:15.139 "method": "bdev_nvme_attach_controller", 00:23:15.139 "req_id": 1 00:23:15.139 } 00:23:15.139 Got JSON-RPC error response 00:23:15.139 response: 00:23:15.139 { 00:23:15.139 "code": -114, 00:23:15.139 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.139 } 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.140 request: 00:23:15.140 { 00:23:15.140 "name": "NVMe0", 00:23:15.140 "trtype": "tcp", 00:23:15.140 "traddr": "10.0.0.2", 00:23:15.140 "adrfam": "ipv4", 00:23:15.140 "trsvcid": "4420", 00:23:15.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.140 "hostaddr": "10.0.0.1", 00:23:15.140 "prchk_reftag": false, 00:23:15.140 "prchk_guard": false, 00:23:15.140 "hdgst": false, 00:23:15.140 "ddgst": false, 00:23:15.140 "allow_unrecognized_csi": false, 00:23:15.140 "method": "bdev_nvme_attach_controller", 00:23:15.140 "req_id": 1 00:23:15.140 } 00:23:15.140 Got JSON-RPC error response 00:23:15.140 response: 00:23:15.140 { 00:23:15.140 "code": -114, 00:23:15.140 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.140 } 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.140 request: 00:23:15.140 { 00:23:15.140 "name": "NVMe0", 00:23:15.140 "trtype": "tcp", 00:23:15.140 "traddr": "10.0.0.2", 00:23:15.140 "adrfam": "ipv4", 00:23:15.140 "trsvcid": "4420", 00:23:15.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.140 
"hostaddr": "10.0.0.1", 00:23:15.140 "prchk_reftag": false, 00:23:15.140 "prchk_guard": false, 00:23:15.140 "hdgst": false, 00:23:15.140 "ddgst": false, 00:23:15.140 "multipath": "disable", 00:23:15.140 "allow_unrecognized_csi": false, 00:23:15.140 "method": "bdev_nvme_attach_controller", 00:23:15.140 "req_id": 1 00:23:15.140 } 00:23:15.140 Got JSON-RPC error response 00:23:15.140 response: 00:23:15.140 { 00:23:15.140 "code": -114, 00:23:15.140 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:15.140 } 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.140 request: 00:23:15.140 { 00:23:15.140 "name": "NVMe0", 00:23:15.140 "trtype": "tcp", 00:23:15.140 "traddr": "10.0.0.2", 00:23:15.140 "adrfam": "ipv4", 00:23:15.140 "trsvcid": "4420", 00:23:15.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.140 "hostaddr": "10.0.0.1", 00:23:15.140 "prchk_reftag": false, 00:23:15.140 "prchk_guard": false, 00:23:15.140 "hdgst": false, 00:23:15.140 "ddgst": false, 00:23:15.140 "multipath": "failover", 00:23:15.140 "allow_unrecognized_csi": false, 00:23:15.140 "method": "bdev_nvme_attach_controller", 00:23:15.140 "req_id": 1 00:23:15.140 } 00:23:15.140 Got JSON-RPC error response 00:23:15.140 response: 00:23:15.140 { 00:23:15.140 "code": -114, 00:23:15.140 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.140 } 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.140 
15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.140 NVMe0n1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.140 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.400 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:15.400 15:57:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.777 { 00:23:16.777 "results": [ 00:23:16.777 { 00:23:16.777 "job": "NVMe0n1", 00:23:16.777 "core_mask": "0x1", 00:23:16.777 "workload": "write", 00:23:16.777 "status": "finished", 00:23:16.777 "queue_depth": 128, 00:23:16.777 "io_size": 4096, 00:23:16.777 "runtime": 1.002845, 00:23:16.777 "iops": 25017.82428989525, 00:23:16.777 "mibps": 97.72587613240331, 00:23:16.777 "io_failed": 0, 00:23:16.777 "io_timeout": 0, 00:23:16.777 "avg_latency_us": 5109.73109930552, 00:23:16.777 "min_latency_us": 3167.5733333333333, 00:23:16.777 "max_latency_us": 13232.030476190475 00:23:16.777 } 00:23:16.777 ], 00:23:16.777 "core_count": 1 00:23:16.777 } 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.777 15:57:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2505286 ']' 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505286' 00:23:16.777 killing process with pid 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2505286 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.777 15:57:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:16.777 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:16.777 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:16.777 [2024-10-01 15:57:24.118144] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:23:16.777 [2024-10-01 15:57:24.118198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505286 ] 00:23:16.777 [2024-10-01 15:57:24.185453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.777 [2024-10-01 15:57:24.264235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.777 [2024-10-01 15:57:25.483750] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name f26b29f2-d5f0-4e46-a6bd-c4a4f0b458ed already exists 00:23:16.777 [2024-10-01 15:57:25.483776] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:f26b29f2-d5f0-4e46-a6bd-c4a4f0b458ed alias for bdev NVMe1n1 00:23:16.777 [2024-10-01 15:57:25.483784] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:16.777 Running I/O for 1 seconds... 00:23:16.777 24961.00 IOPS, 97.50 MiB/s 00:23:16.777 Latency(us) 00:23:16.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.777 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:16.777 NVMe0n1 : 1.00 25017.82 97.73 0.00 0.00 5109.73 3167.57 13232.03 00:23:16.777 =================================================================================================================== 00:23:16.777 Total : 25017.82 97.73 0.00 0.00 5109.73 3167.57 13232.03 00:23:16.777 Received shutdown signal, test time was about 1.000000 seconds 00:23:16.777 00:23:16.778 Latency(us) 00:23:16.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.778 =================================================================================================================== 00:23:16.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.778 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:16.778 15:57:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.778 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.778 rmmod nvme_tcp 00:23:16.778 rmmod nvme_fabrics 00:23:17.037 rmmod nvme_keyring 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 2505259 ']' 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 2505259 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2505259 ']' 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2505259 00:23:17.037 15:57:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:17.037 15:57:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2505259 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2505259' 00:23:17.037 killing process with pid 2505259 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2505259 00:23:17.037 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2505259 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:17.296 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:23:17.297 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.297 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.297 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:17.297 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.297 15:57:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.202 15:57:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.202 00:23:19.202 real 0m11.898s 00:23:19.202 user 0m15.226s 00:23:19.202 sys 0m5.194s 00:23:19.202 15:57:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.202 15:57:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.202 ************************************ 00:23:19.202 END TEST nvmf_multicontroller 00:23:19.202 ************************************ 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.462 ************************************ 00:23:19.462 START TEST nvmf_aer 00:23:19.462 ************************************ 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:19.462 * Looking for test storage... 
00:23:19.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:19.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.462 --rc genhtml_branch_coverage=1 00:23:19.462 --rc genhtml_function_coverage=1 00:23:19.462 --rc genhtml_legend=1 00:23:19.462 --rc geninfo_all_blocks=1 00:23:19.462 --rc geninfo_unexecuted_blocks=1 00:23:19.462 00:23:19.462 ' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:19.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.462 --rc 
genhtml_branch_coverage=1 00:23:19.462 --rc genhtml_function_coverage=1 00:23:19.462 --rc genhtml_legend=1 00:23:19.462 --rc geninfo_all_blocks=1 00:23:19.462 --rc geninfo_unexecuted_blocks=1 00:23:19.462 00:23:19.462 ' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:19.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.462 --rc genhtml_branch_coverage=1 00:23:19.462 --rc genhtml_function_coverage=1 00:23:19.462 --rc genhtml_legend=1 00:23:19.462 --rc geninfo_all_blocks=1 00:23:19.462 --rc geninfo_unexecuted_blocks=1 00:23:19.462 00:23:19.462 ' 00:23:19.462 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:19.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.462 --rc genhtml_branch_coverage=1 00:23:19.462 --rc genhtml_function_coverage=1 00:23:19.462 --rc genhtml_legend=1 00:23:19.462 --rc geninfo_all_blocks=1 00:23:19.462 --rc geninfo_unexecuted_blocks=1 00:23:19.462 00:23:19.462 ' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.463 15:57:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.463 15:57:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.033 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.033 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:23:26.033 00:23:26.033 --- 10.0.0.2 ping statistics --- 00:23:26.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.033 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:26.033 00:23:26.033 --- 10.0.0.1 ping statistics --- 00:23:26.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.033 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:23:26.033 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=2509285 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 2509285 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2509285 ']' 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.034 15:57:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.034 [2024-10-01 15:57:35.602459] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:23:26.034 [2024-10-01 15:57:35.602511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.034 [2024-10-01 15:57:35.660010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.034 [2024-10-01 15:57:35.741208] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:26.034 [2024-10-01 15:57:35.741248] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.034 [2024-10-01 15:57:35.741255] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.034 [2024-10-01 15:57:35.741261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.034 [2024-10-01 15:57:35.741266] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.034 [2024-10-01 15:57:35.742883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.034 [2024-10-01 15:57:35.742918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.034 [2024-10-01 15:57:35.743025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.034 [2024-10-01 15:57:35.743026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.294 [2024-10-01 15:57:36.476212] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.294 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.553 Malloc0 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.553 [2024-10-01 15:57:36.528022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.553 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.553 [ 00:23:26.553 { 00:23:26.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:26.553 "subtype": "Discovery", 00:23:26.553 "listen_addresses": [], 00:23:26.553 "allow_any_host": true, 00:23:26.553 "hosts": [] 00:23:26.553 }, 00:23:26.553 { 00:23:26.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.553 "subtype": "NVMe", 00:23:26.554 "listen_addresses": [ 00:23:26.554 { 00:23:26.554 "trtype": "TCP", 00:23:26.554 "adrfam": "IPv4", 00:23:26.554 "traddr": "10.0.0.2", 00:23:26.554 "trsvcid": "4420" 00:23:26.554 } 00:23:26.554 ], 00:23:26.554 "allow_any_host": true, 00:23:26.554 "hosts": [], 00:23:26.554 "serial_number": "SPDK00000000000001", 00:23:26.554 "model_number": "SPDK bdev Controller", 00:23:26.554 "max_namespaces": 2, 00:23:26.554 "min_cntlid": 1, 00:23:26.554 "max_cntlid": 65519, 00:23:26.554 "namespaces": [ 00:23:26.554 { 00:23:26.554 "nsid": 1, 00:23:26.554 "bdev_name": "Malloc0", 00:23:26.554 "name": "Malloc0", 00:23:26.554 "nguid": "B6669CBA751D49A28565CCDBFB9D99B7", 00:23:26.554 "uuid": "b6669cba-751d-49a2-8565-ccdbfb9d99b7" 00:23:26.554 } 00:23:26.554 ] 00:23:26.554 } 00:23:26.554 ] 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2509533 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:26.554 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 Malloc1 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 Asynchronous Event Request test 00:23:26.813 Attaching to 10.0.0.2 00:23:26.813 Attached to 10.0.0.2 00:23:26.813 Registering asynchronous event callbacks... 00:23:26.813 Starting namespace attribute notice tests for all controllers... 00:23:26.813 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:26.813 aer_cb - Changed Namespace 00:23:26.813 Cleaning up... 
00:23:26.813 [ 00:23:26.813 { 00:23:26.813 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:26.813 "subtype": "Discovery", 00:23:26.813 "listen_addresses": [], 00:23:26.813 "allow_any_host": true, 00:23:26.813 "hosts": [] 00:23:26.813 }, 00:23:26.813 { 00:23:26.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.813 "subtype": "NVMe", 00:23:26.813 "listen_addresses": [ 00:23:26.813 { 00:23:26.813 "trtype": "TCP", 00:23:26.813 "adrfam": "IPv4", 00:23:26.813 "traddr": "10.0.0.2", 00:23:26.813 "trsvcid": "4420" 00:23:26.813 } 00:23:26.813 ], 00:23:26.813 "allow_any_host": true, 00:23:26.813 "hosts": [], 00:23:26.813 "serial_number": "SPDK00000000000001", 00:23:26.813 "model_number": "SPDK bdev Controller", 00:23:26.813 "max_namespaces": 2, 00:23:26.813 "min_cntlid": 1, 00:23:26.813 "max_cntlid": 65519, 00:23:26.813 "namespaces": [ 00:23:26.813 { 00:23:26.813 "nsid": 1, 00:23:26.813 "bdev_name": "Malloc0", 00:23:26.813 "name": "Malloc0", 00:23:26.813 "nguid": "B6669CBA751D49A28565CCDBFB9D99B7", 00:23:26.813 "uuid": "b6669cba-751d-49a2-8565-ccdbfb9d99b7" 00:23:26.813 }, 00:23:26.813 { 00:23:26.813 "nsid": 2, 00:23:26.813 "bdev_name": "Malloc1", 00:23:26.813 "name": "Malloc1", 00:23:26.813 "nguid": "3660E5E7253C49C4BB5E0787F4432FAE", 00:23:26.813 "uuid": "3660e5e7-253c-49c4-bb5e-0787f4432fae" 00:23:26.813 } 00:23:26.813 ] 00:23:26.813 } 00:23:26.813 ] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2509533 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.813 rmmod nvme_tcp 00:23:26.813 rmmod nvme_fabrics 00:23:26.813 rmmod nvme_keyring 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 
2509285 ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 2509285 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2509285 ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2509285 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.813 15:57:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2509285 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2509285' 00:23:27.073 killing process with pid 2509285 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2509285 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2509285 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.073 15:57:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.614 00:23:29.614 real 0m9.840s 00:23:29.614 user 0m7.642s 00:23:29.614 sys 0m4.863s 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.614 ************************************ 00:23:29.614 END TEST nvmf_aer 00:23:29.614 ************************************ 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.614 ************************************ 00:23:29.614 START TEST nvmf_async_init 00:23:29.614 ************************************ 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:29.614 * Looking for test storage... 
00:23:29.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.614 15:57:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.614 --rc genhtml_branch_coverage=1 00:23:29.614 --rc genhtml_function_coverage=1 00:23:29.614 --rc genhtml_legend=1 00:23:29.614 --rc geninfo_all_blocks=1 00:23:29.614 --rc geninfo_unexecuted_blocks=1 00:23:29.614 
00:23:29.614 ' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.614 --rc genhtml_branch_coverage=1 00:23:29.614 --rc genhtml_function_coverage=1 00:23:29.614 --rc genhtml_legend=1 00:23:29.614 --rc geninfo_all_blocks=1 00:23:29.614 --rc geninfo_unexecuted_blocks=1 00:23:29.614 00:23:29.614 ' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.614 --rc genhtml_branch_coverage=1 00:23:29.614 --rc genhtml_function_coverage=1 00:23:29.614 --rc genhtml_legend=1 00:23:29.614 --rc geninfo_all_blocks=1 00:23:29.614 --rc geninfo_unexecuted_blocks=1 00:23:29.614 00:23:29.614 ' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.614 --rc genhtml_branch_coverage=1 00:23:29.614 --rc genhtml_function_coverage=1 00:23:29.614 --rc genhtml_legend=1 00:23:29.614 --rc geninfo_all_blocks=1 00:23:29.614 --rc geninfo_unexecuted_blocks=1 00:23:29.614 00:23:29.614 ' 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:29.614 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1db6fd52932c49eba2a599539615a77e 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.615 15:57:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:36.185 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:36.186 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:36.186 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:36.186 Found net devices under 0000:86:00.0: cvl_0_0 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:36.186 15:57:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:36.186 Found net devices under 0000:86:00.1: cvl_0_1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:36.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:23:36.186 00:23:36.186 --- 10.0.0.2 ping statistics --- 00:23:36.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.186 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:23:36.186 00:23:36.186 --- 10.0.0.1 ping statistics --- 00:23:36.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.186 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=2513062 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 2513062 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2513062 ']' 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.186 15:57:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.186 [2024-10-01 15:57:45.557815] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:23:36.186 [2024-10-01 15:57:45.557873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.186 [2024-10-01 15:57:45.630930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.186 [2024-10-01 15:57:45.709435] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.186 [2024-10-01 15:57:45.709470] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.186 [2024-10-01 15:57:45.709480] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.186 [2024-10-01 15:57:45.709486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.186 [2024-10-01 15:57:45.709491] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.186 [2024-10-01 15:57:45.709508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.444 [2024-10-01 15:57:46.424720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.444 null0 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1db6fd52932c49eba2a599539615a77e 00:23:36.444 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.445 [2024-10-01 15:57:46.472957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.445 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.703 nvme0n1 00:23:36.703 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.703 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.703 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.703 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.703 [ 00:23:36.703 { 00:23:36.703 "name": "nvme0n1", 00:23:36.703 "aliases": [ 00:23:36.703 "1db6fd52-932c-49eb-a2a5-99539615a77e" 00:23:36.703 ], 00:23:36.703 "product_name": "NVMe disk", 00:23:36.703 "block_size": 512, 00:23:36.703 "num_blocks": 2097152, 00:23:36.703 "uuid": "1db6fd52-932c-49eb-a2a5-99539615a77e", 00:23:36.703 "numa_id": 1, 00:23:36.703 "assigned_rate_limits": { 00:23:36.703 "rw_ios_per_sec": 0, 00:23:36.703 "rw_mbytes_per_sec": 0, 00:23:36.703 "r_mbytes_per_sec": 0, 00:23:36.703 "w_mbytes_per_sec": 0 00:23:36.703 }, 00:23:36.703 "claimed": false, 00:23:36.703 "zoned": false, 00:23:36.703 "supported_io_types": { 00:23:36.703 "read": true, 00:23:36.703 "write": true, 00:23:36.703 "unmap": false, 00:23:36.703 "flush": true, 00:23:36.703 "reset": true, 00:23:36.703 "nvme_admin": true, 00:23:36.703 "nvme_io": true, 00:23:36.703 "nvme_io_md": false, 00:23:36.703 "write_zeroes": true, 00:23:36.703 "zcopy": false, 00:23:36.703 "get_zone_info": false, 00:23:36.703 "zone_management": false, 00:23:36.703 "zone_append": false, 00:23:36.703 "compare": true, 00:23:36.703 "compare_and_write": true, 00:23:36.703 "abort": true, 00:23:36.703 "seek_hole": false, 00:23:36.703 "seek_data": false, 00:23:36.703 "copy": true, 00:23:36.703 
"nvme_iov_md": false 00:23:36.703 }, 00:23:36.703 "memory_domains": [ 00:23:36.703 { 00:23:36.703 "dma_device_id": "system", 00:23:36.703 "dma_device_type": 1 00:23:36.703 } 00:23:36.703 ], 00:23:36.703 "driver_specific": { 00:23:36.703 "nvme": [ 00:23:36.703 { 00:23:36.703 "trid": { 00:23:36.703 "trtype": "TCP", 00:23:36.703 "adrfam": "IPv4", 00:23:36.703 "traddr": "10.0.0.2", 00:23:36.703 "trsvcid": "4420", 00:23:36.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.703 }, 00:23:36.703 "ctrlr_data": { 00:23:36.703 "cntlid": 1, 00:23:36.703 "vendor_id": "0x8086", 00:23:36.703 "model_number": "SPDK bdev Controller", 00:23:36.704 "serial_number": "00000000000000000000", 00:23:36.704 "firmware_revision": "25.01", 00:23:36.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.704 "oacs": { 00:23:36.704 "security": 0, 00:23:36.704 "format": 0, 00:23:36.704 "firmware": 0, 00:23:36.704 "ns_manage": 0 00:23:36.704 }, 00:23:36.704 "multi_ctrlr": true, 00:23:36.704 "ana_reporting": false 00:23:36.704 }, 00:23:36.704 "vs": { 00:23:36.704 "nvme_version": "1.3" 00:23:36.704 }, 00:23:36.704 "ns_data": { 00:23:36.704 "id": 1, 00:23:36.704 "can_share": true 00:23:36.704 } 00:23:36.704 } 00:23:36.704 ], 00:23:36.704 "mp_policy": "active_passive" 00:23:36.704 } 00:23:36.704 } 00:23:36.704 ] 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.704 [2024-10-01 15:57:46.733481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.704 [2024-10-01 15:57:46.733536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x7f52e0 (9): Bad file descriptor 00:23:36.704 [2024-10-01 15:57:46.864942] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.704 [ 00:23:36.704 { 00:23:36.704 "name": "nvme0n1", 00:23:36.704 "aliases": [ 00:23:36.704 "1db6fd52-932c-49eb-a2a5-99539615a77e" 00:23:36.704 ], 00:23:36.704 "product_name": "NVMe disk", 00:23:36.704 "block_size": 512, 00:23:36.704 "num_blocks": 2097152, 00:23:36.704 "uuid": "1db6fd52-932c-49eb-a2a5-99539615a77e", 00:23:36.704 "numa_id": 1, 00:23:36.704 "assigned_rate_limits": { 00:23:36.704 "rw_ios_per_sec": 0, 00:23:36.704 "rw_mbytes_per_sec": 0, 00:23:36.704 "r_mbytes_per_sec": 0, 00:23:36.704 "w_mbytes_per_sec": 0 00:23:36.704 }, 00:23:36.704 "claimed": false, 00:23:36.704 "zoned": false, 00:23:36.704 "supported_io_types": { 00:23:36.704 "read": true, 00:23:36.704 "write": true, 00:23:36.704 "unmap": false, 00:23:36.704 "flush": true, 00:23:36.704 "reset": true, 00:23:36.704 "nvme_admin": true, 00:23:36.704 "nvme_io": true, 00:23:36.704 "nvme_io_md": false, 00:23:36.704 "write_zeroes": true, 00:23:36.704 "zcopy": false, 00:23:36.704 "get_zone_info": false, 00:23:36.704 "zone_management": false, 00:23:36.704 "zone_append": false, 00:23:36.704 "compare": true, 00:23:36.704 "compare_and_write": true, 00:23:36.704 "abort": true, 00:23:36.704 "seek_hole": false, 00:23:36.704 "seek_data": false, 00:23:36.704 "copy": true, 00:23:36.704 "nvme_iov_md": false 00:23:36.704 }, 00:23:36.704 "memory_domains": [ 00:23:36.704 { 00:23:36.704 
"dma_device_id": "system", 00:23:36.704 "dma_device_type": 1 00:23:36.704 } 00:23:36.704 ], 00:23:36.704 "driver_specific": { 00:23:36.704 "nvme": [ 00:23:36.704 { 00:23:36.704 "trid": { 00:23:36.704 "trtype": "TCP", 00:23:36.704 "adrfam": "IPv4", 00:23:36.704 "traddr": "10.0.0.2", 00:23:36.704 "trsvcid": "4420", 00:23:36.704 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.704 }, 00:23:36.704 "ctrlr_data": { 00:23:36.704 "cntlid": 2, 00:23:36.704 "vendor_id": "0x8086", 00:23:36.704 "model_number": "SPDK bdev Controller", 00:23:36.704 "serial_number": "00000000000000000000", 00:23:36.704 "firmware_revision": "25.01", 00:23:36.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.704 "oacs": { 00:23:36.704 "security": 0, 00:23:36.704 "format": 0, 00:23:36.704 "firmware": 0, 00:23:36.704 "ns_manage": 0 00:23:36.704 }, 00:23:36.704 "multi_ctrlr": true, 00:23:36.704 "ana_reporting": false 00:23:36.704 }, 00:23:36.704 "vs": { 00:23:36.704 "nvme_version": "1.3" 00:23:36.704 }, 00:23:36.704 "ns_data": { 00:23:36.704 "id": 1, 00:23:36.704 "can_share": true 00:23:36.704 } 00:23:36.704 } 00:23:36.704 ], 00:23:36.704 "mp_policy": "active_passive" 00:23:36.704 } 00:23:36.704 } 00:23:36.704 ] 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.704 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.963 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.963 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:36.963 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RzAt7gV2IC 00:23:36.963 15:57:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:36.963 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RzAt7gV2IC 00:23:36.963 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.RzAt7gV2IC 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 [2024-10-01 15:57:46.934082] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.964 [2024-10-01 15:57:46.934173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 [2024-10-01 15:57:46.958159] bdev_nvme_rpc.c: 516:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.964 nvme0n1 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 [ 00:23:36.964 { 00:23:36.964 "name": "nvme0n1", 00:23:36.964 "aliases": [ 00:23:36.964 "1db6fd52-932c-49eb-a2a5-99539615a77e" 00:23:36.964 ], 00:23:36.964 "product_name": "NVMe disk", 00:23:36.964 "block_size": 512, 00:23:36.964 "num_blocks": 2097152, 00:23:36.964 "uuid": "1db6fd52-932c-49eb-a2a5-99539615a77e", 00:23:36.964 "numa_id": 1, 00:23:36.964 "assigned_rate_limits": { 00:23:36.964 "rw_ios_per_sec": 0, 00:23:36.964 "rw_mbytes_per_sec": 0, 
00:23:36.964 "r_mbytes_per_sec": 0, 00:23:36.964 "w_mbytes_per_sec": 0 00:23:36.964 }, 00:23:36.964 "claimed": false, 00:23:36.964 "zoned": false, 00:23:36.964 "supported_io_types": { 00:23:36.964 "read": true, 00:23:36.964 "write": true, 00:23:36.964 "unmap": false, 00:23:36.964 "flush": true, 00:23:36.964 "reset": true, 00:23:36.964 "nvme_admin": true, 00:23:36.964 "nvme_io": true, 00:23:36.964 "nvme_io_md": false, 00:23:36.964 "write_zeroes": true, 00:23:36.964 "zcopy": false, 00:23:36.964 "get_zone_info": false, 00:23:36.964 "zone_management": false, 00:23:36.964 "zone_append": false, 00:23:36.964 "compare": true, 00:23:36.964 "compare_and_write": true, 00:23:36.964 "abort": true, 00:23:36.964 "seek_hole": false, 00:23:36.964 "seek_data": false, 00:23:36.964 "copy": true, 00:23:36.964 "nvme_iov_md": false 00:23:36.964 }, 00:23:36.964 "memory_domains": [ 00:23:36.964 { 00:23:36.964 "dma_device_id": "system", 00:23:36.964 "dma_device_type": 1 00:23:36.964 } 00:23:36.964 ], 00:23:36.964 "driver_specific": { 00:23:36.964 "nvme": [ 00:23:36.964 { 00:23:36.964 "trid": { 00:23:36.964 "trtype": "TCP", 00:23:36.964 "adrfam": "IPv4", 00:23:36.964 "traddr": "10.0.0.2", 00:23:36.964 "trsvcid": "4421", 00:23:36.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.964 }, 00:23:36.964 "ctrlr_data": { 00:23:36.964 "cntlid": 3, 00:23:36.964 "vendor_id": "0x8086", 00:23:36.964 "model_number": "SPDK bdev Controller", 00:23:36.964 "serial_number": "00000000000000000000", 00:23:36.964 "firmware_revision": "25.01", 00:23:36.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.964 "oacs": { 00:23:36.964 "security": 0, 00:23:36.964 "format": 0, 00:23:36.964 "firmware": 0, 00:23:36.964 "ns_manage": 0 00:23:36.964 }, 00:23:36.964 "multi_ctrlr": true, 00:23:36.964 "ana_reporting": false 00:23:36.964 }, 00:23:36.964 "vs": { 00:23:36.964 "nvme_version": "1.3" 00:23:36.964 }, 00:23:36.964 "ns_data": { 00:23:36.964 "id": 1, 00:23:36.964 "can_share": true 00:23:36.964 } 00:23:36.964 } 
00:23:36.964 ], 00:23:36.964 "mp_policy": "active_passive" 00:23:36.964 } 00:23:36.964 } 00:23:36.964 ] 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.RzAt7gV2IC 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.964 rmmod nvme_tcp 00:23:36.964 rmmod nvme_fabrics 00:23:36.964 rmmod nvme_keyring 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:36.964 15:57:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 2513062 ']' 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 2513062 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2513062 ']' 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2513062 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.964 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2513062 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2513062' 00:23:37.223 killing process with pid 2513062 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2513062 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2513062 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:37.223 
15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.223 15:57:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.760 00:23:39.760 real 0m10.082s 00:23:39.760 user 0m3.901s 00:23:39.760 sys 0m4.772s 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.760 ************************************ 00:23:39.760 END TEST nvmf_async_init 00:23:39.760 ************************************ 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.760 ************************************ 00:23:39.760 START TEST dma 00:23:39.760 ************************************ 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:39.760 * Looking for test storage... 00:23:39.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.760 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:39.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.761 --rc genhtml_branch_coverage=1 00:23:39.761 --rc genhtml_function_coverage=1 00:23:39.761 --rc genhtml_legend=1 00:23:39.761 --rc geninfo_all_blocks=1 00:23:39.761 --rc geninfo_unexecuted_blocks=1 00:23:39.761 00:23:39.761 ' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:39.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.761 --rc genhtml_branch_coverage=1 00:23:39.761 --rc genhtml_function_coverage=1 
00:23:39.761 --rc genhtml_legend=1 00:23:39.761 --rc geninfo_all_blocks=1 00:23:39.761 --rc geninfo_unexecuted_blocks=1 00:23:39.761 00:23:39.761 ' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:39.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.761 --rc genhtml_branch_coverage=1 00:23:39.761 --rc genhtml_function_coverage=1 00:23:39.761 --rc genhtml_legend=1 00:23:39.761 --rc geninfo_all_blocks=1 00:23:39.761 --rc geninfo_unexecuted_blocks=1 00:23:39.761 00:23:39.761 ' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:39.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.761 --rc genhtml_branch_coverage=1 00:23:39.761 --rc genhtml_function_coverage=1 00:23:39.761 --rc genhtml_legend=1 00:23:39.761 --rc geninfo_all_blocks=1 00:23:39.761 --rc geninfo_unexecuted_blocks=1 00:23:39.761 00:23:39.761 ' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:39.761 
15:57:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:39.761 00:23:39.761 real 0m0.209s 00:23:39.761 user 0m0.127s 00:23:39.761 sys 0m0.097s 00:23:39.761 15:57:49 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:39.761 ************************************ 00:23:39.761 END TEST dma 00:23:39.761 ************************************ 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.761 ************************************ 00:23:39.761 START TEST nvmf_identify 00:23:39.761 ************************************ 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.761 * Looking for test storage... 
00:23:39.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.761 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.762 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:40.042 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.043 --rc genhtml_branch_coverage=1 00:23:40.043 --rc genhtml_function_coverage=1 00:23:40.043 --rc genhtml_legend=1 00:23:40.043 --rc geninfo_all_blocks=1 00:23:40.043 --rc geninfo_unexecuted_blocks=1 00:23:40.043 00:23:40.043 ' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:23:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.043 --rc genhtml_branch_coverage=1 00:23:40.043 --rc genhtml_function_coverage=1 00:23:40.043 --rc genhtml_legend=1 00:23:40.043 --rc geninfo_all_blocks=1 00:23:40.043 --rc geninfo_unexecuted_blocks=1 00:23:40.043 00:23:40.043 ' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.043 --rc genhtml_branch_coverage=1 00:23:40.043 --rc genhtml_function_coverage=1 00:23:40.043 --rc genhtml_legend=1 00:23:40.043 --rc geninfo_all_blocks=1 00:23:40.043 --rc geninfo_unexecuted_blocks=1 00:23:40.043 00:23:40.043 ' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.043 --rc genhtml_branch_coverage=1 00:23:40.043 --rc genhtml_function_coverage=1 00:23:40.043 --rc genhtml_legend=1 00:23:40.043 --rc geninfo_all_blocks=1 00:23:40.043 --rc geninfo_unexecuted_blocks=1 00:23:40.043 00:23:40.043 ' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.043 15:57:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.626 15:57:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:46.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:46.626 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:46.626 Found net devices under 0000:86:00.0: cvl_0_0 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:46.626 Found net devices under 0000:86:00.1: cvl_0_1 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.626 15:57:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.626 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:23:46.627 00:23:46.627 --- 10.0.0.2 ping statistics --- 00:23:46.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.627 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:23:46.627 00:23:46.627 --- 10.0.0.1 ping statistics --- 00:23:46.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.627 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:46.627 15:57:55 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2516897 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2516897 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2516897 ']' 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.627 15:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.627 [2024-10-01 15:57:55.969066] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:23:46.627 [2024-10-01 15:57:55.969109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.627 [2024-10-01 15:57:56.039315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.627 [2024-10-01 15:57:56.120297] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.627 [2024-10-01 15:57:56.120334] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.627 [2024-10-01 15:57:56.120342] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.627 [2024-10-01 15:57:56.120347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.627 [2024-10-01 15:57:56.120352] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.627 [2024-10-01 15:57:56.120414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.627 [2024-10-01 15:57:56.120521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.627 [2024-10-01 15:57:56.120629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.627 [2024-10-01 15:57:56.120630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.627 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.627 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:46.627 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.627 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.627 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.627 [2024-10-01 15:57:56.815495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 Malloc0 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 [2024-10-01 15:57:56.903176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 15:57:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.888 [ 00:23:46.888 { 00:23:46.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:46.888 "subtype": "Discovery", 00:23:46.888 "listen_addresses": [ 00:23:46.888 { 00:23:46.888 "trtype": "TCP", 00:23:46.888 "adrfam": "IPv4", 00:23:46.888 "traddr": "10.0.0.2", 00:23:46.888 "trsvcid": "4420" 00:23:46.888 } 00:23:46.888 ], 00:23:46.888 "allow_any_host": true, 00:23:46.888 "hosts": [] 00:23:46.888 }, 00:23:46.888 { 00:23:46.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.888 "subtype": "NVMe", 00:23:46.888 "listen_addresses": [ 00:23:46.888 { 00:23:46.888 "trtype": "TCP", 00:23:46.888 "adrfam": "IPv4", 00:23:46.888 "traddr": "10.0.0.2", 00:23:46.888 "trsvcid": "4420" 00:23:46.888 } 00:23:46.888 ], 00:23:46.888 "allow_any_host": true, 00:23:46.888 "hosts": [], 00:23:46.888 "serial_number": "SPDK00000000000001", 00:23:46.888 "model_number": "SPDK bdev Controller", 00:23:46.888 "max_namespaces": 32, 00:23:46.888 "min_cntlid": 1, 00:23:46.888 "max_cntlid": 65519, 00:23:46.888 "namespaces": [ 00:23:46.888 { 00:23:46.888 "nsid": 1, 00:23:46.888 "bdev_name": "Malloc0", 00:23:46.888 "name": "Malloc0", 00:23:46.888 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:46.888 "eui64": "ABCDEF0123456789", 00:23:46.888 "uuid": "78c6d33a-639c-42ad-afb3-be60725f37f2" 00:23:46.888 } 00:23:46.888 ] 00:23:46.888 } 00:23:46.888 ] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.888 15:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:46.888 [2024-10-01 15:57:56.953681] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:23:46.888 [2024-10-01 15:57:56.953715] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517145 ] 00:23:46.888 [2024-10-01 15:57:56.979841] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:46.888 [2024-10-01 15:57:56.979892] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:46.888 [2024-10-01 15:57:56.979897] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:46.888 [2024-10-01 15:57:56.979911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:46.888 [2024-10-01 15:57:56.979920] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:46.888 [2024-10-01 15:57:56.984156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:46.888 [2024-10-01 15:57:56.984191] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1265760 0 00:23:46.888 [2024-10-01 15:57:56.991877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:46.888 [2024-10-01 15:57:56.991892] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:46.888 [2024-10-01 15:57:56.991897] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:46.888 [2024-10-01 15:57:56.991900] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:46.888 [2024-10-01 15:57:56.991928] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.991934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.991937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.991949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:46.889 [2024-10-01 15:57:56.991967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.998871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.998879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.998883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.998887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.998899] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:46.889 [2024-10-01 15:57:56.998905] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:46.889 [2024-10-01 15:57:56.998910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:46.889 [2024-10-01 15:57:56.998924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.998928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.998931] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 
00:23:46.889 [2024-10-01 15:57:56.998938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.998951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999109] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.999118] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999121] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999125] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:46.889 [2024-10-01 15:57:56.999132] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:46.889 [2024-10-01 15:57:56.999138] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999163] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999231] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:46.889 [2024-10-01 15:57:56.999234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999242] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:46.889 [2024-10-01 15:57:56.999249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:46.889 [2024-10-01 15:57:56.999255] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999342] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.999345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:46.889 [2024-10-01 15:57:56.999360] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999364] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999451] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.999454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999462] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:46.889 [2024-10-01 15:57:56.999466] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:46.889 [2024-10-01 15:57:56.999472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:46.889 [2024-10-01 15:57:56.999577] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:46.889 [2024-10-01 15:57:56.999581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:23:46.889 [2024-10-01 15:57:56.999590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999611] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999680] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.999683] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:46.889 [2024-10-01 15:57:56.999698] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 
15:57:56.999785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:56.999793] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:56.999800] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:46.889 [2024-10-01 15:57:56.999804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:46.889 [2024-10-01 15:57:56.999811] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:46.889 [2024-10-01 15:57:56.999818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:46.889 [2024-10-01 15:57:56.999826] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999829] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.889 [2024-10-01 15:57:56.999835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.889 [2024-10-01 15:57:56.999844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.889 [2024-10-01 15:57:56.999942] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.889 [2024-10-01 15:57:56.999948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:46.889 [2024-10-01 15:57:56.999951] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999955] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1265760): datao=0, datal=4096, cccid=0 00:23:46.889 [2024-10-01 15:57:56.999959] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12c5480) on tqpair(0x1265760): expected_datao=0, payload_size=4096 00:23:46.889 [2024-10-01 15:57:56.999965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999971] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999975] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:56.999992] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.889 [2024-10-01 15:57:56.999997] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.889 [2024-10-01 15:57:57.000000] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.889 [2024-10-01 15:57:57.000004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.889 [2024-10-01 15:57:57.000010] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:46.889 [2024-10-01 15:57:57.000015] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:46.889 [2024-10-01 15:57:57.000019] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:46.890 [2024-10-01 15:57:57.000024] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:46.890 [2024-10-01 15:57:57.000028] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:46.890 [2024-10-01 15:57:57.000032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:46.890 [2024-10-01 15:57:57.000040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:46.890 [2024-10-01 15:57:57.000046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000053] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.890 [2024-10-01 15:57:57.000069] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.890 [2024-10-01 15:57:57.000138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.890 [2024-10-01 15:57:57.000144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.890 [2024-10-01 15:57:57.000147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000150] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:46.890 [2024-10-01 15:57:57.000157] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.890 [2024-10-01 15:57:57.000174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000177] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000180] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.890 [2024-10-01 15:57:57.000190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.890 [2024-10-01 15:57:57.000208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.890 [2024-10-01 15:57:57.000224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:46.890 [2024-10-01 15:57:57.000234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:46.890 [2024-10-01 15:57:57.000239] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.890 [2024-10-01 15:57:57.000259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5480, cid 0, qid 0 00:23:46.890 [2024-10-01 15:57:57.000263] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5600, cid 1, qid 0 00:23:46.890 [2024-10-01 15:57:57.000267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5780, cid 2, qid 0 00:23:46.890 [2024-10-01 15:57:57.000271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:46.890 [2024-10-01 15:57:57.000275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5a80, cid 4, qid 0 00:23:46.890 [2024-10-01 15:57:57.000369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.890 [2024-10-01 15:57:57.000375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.890 [2024-10-01 15:57:57.000378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5a80) on tqpair=0x1265760 00:23:46.890 [2024-10-01 15:57:57.000386] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:46.890 [2024-10-01 15:57:57.000391] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:46.890 [2024-10-01 15:57:57.000400] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.000409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.890 [2024-10-01 15:57:57.000418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5a80, cid 4, qid 0 00:23:46.890 [2024-10-01 15:57:57.000490] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.890 [2024-10-01 15:57:57.000496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.890 [2024-10-01 15:57:57.000499] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000502] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1265760): datao=0, datal=4096, cccid=4 00:23:46.890 [2024-10-01 15:57:57.000506] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12c5a80) on tqpair(0x1265760): expected_datao=0, payload_size=4096 00:23:46.890 [2024-10-01 15:57:57.000510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000519] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.000523] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.044869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.890 [2024-10-01 15:57:57.044879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.890 [2024-10-01 15:57:57.044883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.044886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5a80) on tqpair=0x1265760 00:23:46.890 [2024-10-01 15:57:57.044901] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:46.890 [2024-10-01 15:57:57.044927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.044932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.044939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.890 [2024-10-01 15:57:57.044945] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.044949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.044952] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1265760) 00:23:46.890 [2024-10-01 15:57:57.044957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.890 [2024-10-01 15:57:57.044970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5a80, cid 4, qid 0 00:23:46.890 [2024-10-01 15:57:57.044975] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5c00, cid 5, qid 0 00:23:46.890 [2024-10-01 15:57:57.045155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.890 [2024-10-01 15:57:57.045161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.890 [2024-10-01 15:57:57.045164] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.045167] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1265760): datao=0, datal=1024, cccid=4 00:23:46.890 [2024-10-01 15:57:57.045171] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12c5a80) on tqpair(0x1265760): expected_datao=0, 
payload_size=1024 00:23:46.890 [2024-10-01 15:57:57.045175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.045180] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.045184] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.045189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.890 [2024-10-01 15:57:57.045193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.890 [2024-10-01 15:57:57.045196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.890 [2024-10-01 15:57:57.045200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5c00) on tqpair=0x1265760 00:23:47.159 [2024-10-01 15:57:57.087003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.159 [2024-10-01 15:57:57.087015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.159 [2024-10-01 15:57:57.087018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5a80) on tqpair=0x1265760 00:23:47.159 [2024-10-01 15:57:57.087041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087045] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1265760) 00:23:47.159 [2024-10-01 15:57:57.087053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-10-01 15:57:57.087070] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5a80, cid 4, qid 0 00:23:47.159 [2024-10-01 15:57:57.087147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.159 [2024-10-01 15:57:57.087153] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.159 [2024-10-01 15:57:57.087158] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087162] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1265760): datao=0, datal=3072, cccid=4 00:23:47.159 [2024-10-01 15:57:57.087166] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12c5a80) on tqpair(0x1265760): expected_datao=0, payload_size=3072 00:23:47.159 [2024-10-01 15:57:57.087170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087176] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087179] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.159 [2024-10-01 15:57:57.087208] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.159 [2024-10-01 15:57:57.087211] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5a80) on tqpair=0x1265760 00:23:47.159 [2024-10-01 15:57:57.087222] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087226] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1265760) 00:23:47.159 [2024-10-01 15:57:57.087231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-10-01 15:57:57.087245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5a80, cid 4, qid 0 00:23:47.159 [2024-10-01 15:57:57.087317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.159 [2024-10-01 
15:57:57.087323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.159 [2024-10-01 15:57:57.087326] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087329] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1265760): datao=0, datal=8, cccid=4 00:23:47.159 [2024-10-01 15:57:57.087333] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12c5a80) on tqpair(0x1265760): expected_datao=0, payload_size=8 00:23:47.159 [2024-10-01 15:57:57.087336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087342] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.087345] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.131874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.159 [2024-10-01 15:57:57.131883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.159 [2024-10-01 15:57:57.131886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.159 [2024-10-01 15:57:57.131890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5a80) on tqpair=0x1265760 00:23:47.159 ===================================================== 00:23:47.159 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:47.159 ===================================================== 00:23:47.159 Controller Capabilities/Features 00:23:47.159 ================================ 00:23:47.159 Vendor ID: 0000 00:23:47.159 Subsystem Vendor ID: 0000 00:23:47.159 Serial Number: .................... 00:23:47.159 Model Number: ........................................ 
00:23:47.159 Firmware Version: 25.01 00:23:47.159 Recommended Arb Burst: 0 00:23:47.159 IEEE OUI Identifier: 00 00 00 00:23:47.159 Multi-path I/O 00:23:47.159 May have multiple subsystem ports: No 00:23:47.159 May have multiple controllers: No 00:23:47.159 Associated with SR-IOV VF: No 00:23:47.159 Max Data Transfer Size: 131072 00:23:47.159 Max Number of Namespaces: 0 00:23:47.159 Max Number of I/O Queues: 1024 00:23:47.159 NVMe Specification Version (VS): 1.3 00:23:47.159 NVMe Specification Version (Identify): 1.3 00:23:47.159 Maximum Queue Entries: 128 00:23:47.159 Contiguous Queues Required: Yes 00:23:47.159 Arbitration Mechanisms Supported 00:23:47.159 Weighted Round Robin: Not Supported 00:23:47.159 Vendor Specific: Not Supported 00:23:47.159 Reset Timeout: 15000 ms 00:23:47.159 Doorbell Stride: 4 bytes 00:23:47.159 NVM Subsystem Reset: Not Supported 00:23:47.159 Command Sets Supported 00:23:47.159 NVM Command Set: Supported 00:23:47.159 Boot Partition: Not Supported 00:23:47.159 Memory Page Size Minimum: 4096 bytes 00:23:47.159 Memory Page Size Maximum: 4096 bytes 00:23:47.159 Persistent Memory Region: Not Supported 00:23:47.159 Optional Asynchronous Events Supported 00:23:47.159 Namespace Attribute Notices: Not Supported 00:23:47.159 Firmware Activation Notices: Not Supported 00:23:47.159 ANA Change Notices: Not Supported 00:23:47.159 PLE Aggregate Log Change Notices: Not Supported 00:23:47.159 LBA Status Info Alert Notices: Not Supported 00:23:47.159 EGE Aggregate Log Change Notices: Not Supported 00:23:47.159 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.159 Zone Descriptor Change Notices: Not Supported 00:23:47.159 Discovery Log Change Notices: Supported 00:23:47.159 Controller Attributes 00:23:47.159 128-bit Host Identifier: Not Supported 00:23:47.159 Non-Operational Permissive Mode: Not Supported 00:23:47.159 NVM Sets: Not Supported 00:23:47.159 Read Recovery Levels: Not Supported 00:23:47.159 Endurance Groups: Not Supported 00:23:47.159 
Predictable Latency Mode: Not Supported 00:23:47.159 Traffic Based Keep ALive: Not Supported 00:23:47.159 Namespace Granularity: Not Supported 00:23:47.159 SQ Associations: Not Supported 00:23:47.159 UUID List: Not Supported 00:23:47.159 Multi-Domain Subsystem: Not Supported 00:23:47.159 Fixed Capacity Management: Not Supported 00:23:47.159 Variable Capacity Management: Not Supported 00:23:47.159 Delete Endurance Group: Not Supported 00:23:47.159 Delete NVM Set: Not Supported 00:23:47.159 Extended LBA Formats Supported: Not Supported 00:23:47.159 Flexible Data Placement Supported: Not Supported 00:23:47.159 00:23:47.159 Controller Memory Buffer Support 00:23:47.159 ================================ 00:23:47.159 Supported: No 00:23:47.159 00:23:47.159 Persistent Memory Region Support 00:23:47.159 ================================ 00:23:47.159 Supported: No 00:23:47.159 00:23:47.159 Admin Command Set Attributes 00:23:47.159 ============================ 00:23:47.159 Security Send/Receive: Not Supported 00:23:47.159 Format NVM: Not Supported 00:23:47.159 Firmware Activate/Download: Not Supported 00:23:47.159 Namespace Management: Not Supported 00:23:47.159 Device Self-Test: Not Supported 00:23:47.159 Directives: Not Supported 00:23:47.159 NVMe-MI: Not Supported 00:23:47.159 Virtualization Management: Not Supported 00:23:47.159 Doorbell Buffer Config: Not Supported 00:23:47.159 Get LBA Status Capability: Not Supported 00:23:47.159 Command & Feature Lockdown Capability: Not Supported 00:23:47.159 Abort Command Limit: 1 00:23:47.159 Async Event Request Limit: 4 00:23:47.159 Number of Firmware Slots: N/A 00:23:47.159 Firmware Slot 1 Read-Only: N/A 00:23:47.159 Firmware Activation Without Reset: N/A 00:23:47.159 Multiple Update Detection Support: N/A 00:23:47.159 Firmware Update Granularity: No Information Provided 00:23:47.159 Per-Namespace SMART Log: No 00:23:47.159 Asymmetric Namespace Access Log Page: Not Supported 00:23:47.159 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:47.159 Command Effects Log Page: Not Supported 00:23:47.159 Get Log Page Extended Data: Supported 00:23:47.159 Telemetry Log Pages: Not Supported 00:23:47.159 Persistent Event Log Pages: Not Supported 00:23:47.159 Supported Log Pages Log Page: May Support 00:23:47.159 Commands Supported & Effects Log Page: Not Supported 00:23:47.159 Feature Identifiers & Effects Log Page:May Support 00:23:47.159 NVMe-MI Commands & Effects Log Page: May Support 00:23:47.159 Data Area 4 for Telemetry Log: Not Supported 00:23:47.159 Error Log Page Entries Supported: 128 00:23:47.159 Keep Alive: Not Supported 00:23:47.159 00:23:47.159 NVM Command Set Attributes 00:23:47.159 ========================== 00:23:47.159 Submission Queue Entry Size 00:23:47.159 Max: 1 00:23:47.159 Min: 1 00:23:47.159 Completion Queue Entry Size 00:23:47.159 Max: 1 00:23:47.159 Min: 1 00:23:47.159 Number of Namespaces: 0 00:23:47.159 Compare Command: Not Supported 00:23:47.159 Write Uncorrectable Command: Not Supported 00:23:47.159 Dataset Management Command: Not Supported 00:23:47.159 Write Zeroes Command: Not Supported 00:23:47.159 Set Features Save Field: Not Supported 00:23:47.160 Reservations: Not Supported 00:23:47.160 Timestamp: Not Supported 00:23:47.160 Copy: Not Supported 00:23:47.160 Volatile Write Cache: Not Present 00:23:47.160 Atomic Write Unit (Normal): 1 00:23:47.160 Atomic Write Unit (PFail): 1 00:23:47.160 Atomic Compare & Write Unit: 1 00:23:47.160 Fused Compare & Write: Supported 00:23:47.160 Scatter-Gather List 00:23:47.160 SGL Command Set: Supported 00:23:47.160 SGL Keyed: Supported 00:23:47.160 SGL Bit Bucket Descriptor: Not Supported 00:23:47.160 SGL Metadata Pointer: Not Supported 00:23:47.160 Oversized SGL: Not Supported 00:23:47.160 SGL Metadata Address: Not Supported 00:23:47.160 SGL Offset: Supported 00:23:47.160 Transport SGL Data Block: Not Supported 00:23:47.160 Replay Protected Memory Block: Not Supported 00:23:47.160 00:23:47.160 
Firmware Slot Information 00:23:47.160 ========================= 00:23:47.160 Active slot: 0 00:23:47.160 00:23:47.160 00:23:47.160 Error Log 00:23:47.160 ========= 00:23:47.160 00:23:47.160 Active Namespaces 00:23:47.160 ================= 00:23:47.160 Discovery Log Page 00:23:47.160 ================== 00:23:47.160 Generation Counter: 2 00:23:47.160 Number of Records: 2 00:23:47.160 Record Format: 0 00:23:47.160 00:23:47.160 Discovery Log Entry 0 00:23:47.160 ---------------------- 00:23:47.160 Transport Type: 3 (TCP) 00:23:47.160 Address Family: 1 (IPv4) 00:23:47.160 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:47.160 Entry Flags: 00:23:47.160 Duplicate Returned Information: 1 00:23:47.160 Explicit Persistent Connection Support for Discovery: 1 00:23:47.160 Transport Requirements: 00:23:47.160 Secure Channel: Not Required 00:23:47.160 Port ID: 0 (0x0000) 00:23:47.160 Controller ID: 65535 (0xffff) 00:23:47.160 Admin Max SQ Size: 128 00:23:47.160 Transport Service Identifier: 4420 00:23:47.160 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:47.160 Transport Address: 10.0.0.2 00:23:47.160 Discovery Log Entry 1 00:23:47.160 ---------------------- 00:23:47.160 Transport Type: 3 (TCP) 00:23:47.160 Address Family: 1 (IPv4) 00:23:47.160 Subsystem Type: 2 (NVM Subsystem) 00:23:47.160 Entry Flags: 00:23:47.160 Duplicate Returned Information: 0 00:23:47.160 Explicit Persistent Connection Support for Discovery: 0 00:23:47.160 Transport Requirements: 00:23:47.160 Secure Channel: Not Required 00:23:47.160 Port ID: 0 (0x0000) 00:23:47.160 Controller ID: 65535 (0xffff) 00:23:47.160 Admin Max SQ Size: 128 00:23:47.160 Transport Service Identifier: 4420 00:23:47.160 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:47.160 Transport Address: 10.0.0.2 [2024-10-01 15:57:57.131963] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:47.160 [2024-10-01 15:57:57.131972] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5480) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.131979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-10-01 15:57:57.131984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5600) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.131988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-10-01 15:57:57.131992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5780) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.131996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-10-01 15:57:57.132000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-10-01 15:57:57.132013] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132040] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132122] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 [2024-10-01 15:57:57.132125] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132246] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 [2024-10-01 15:57:57.132249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132252] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132257] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:47.160 [2024-10-01 15:57:57.132263] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:47.160 [2024-10-01 15:57:57.132271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132275] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 
15:57:57.132278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132365] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 [2024-10-01 15:57:57.132368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132387] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132401] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 [2024-10-01 15:57:57.132486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 
00:23:47.160 [2024-10-01 15:57:57.132497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132501] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132504] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132581] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 [2024-10-01 15:57:57.132585] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132588] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132616] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.160 [2024-10-01 15:57:57.132679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.160 
[2024-10-01 15:57:57.132682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.160 [2024-10-01 15:57:57.132693] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.160 [2024-10-01 15:57:57.132700] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.160 [2024-10-01 15:57:57.132705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-10-01 15:57:57.132714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.160 [2024-10-01 15:57:57.132801] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.132807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.132809] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132813] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.132820] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132824] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132827] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.132832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.132842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 
0 00:23:47.161 [2024-10-01 15:57:57.132911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.132917] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.132920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.132933] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.132939] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.132945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.132954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133053] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133065] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133140] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133146] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133175] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133242] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133253] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133257] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133342] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133350] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133354] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133363] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133367] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133370] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133455] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133458] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133473] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133556] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133559] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133566] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133570] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 
15:57:57.133651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133653] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133657] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133668] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133671] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133746] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133751] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133754] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133758] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 
15:57:57.133790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133869] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133873] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133888] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133891] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.133906] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.133965] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.133971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.161 [2024-10-01 15:57:57.133974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.161 [2024-10-01 15:57:57.133985] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.161 [2024-10-01 15:57:57.133991] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.161 [2024-10-01 15:57:57.133997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-10-01 15:57:57.134006] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.161 [2024-10-01 15:57:57.134074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.161 [2024-10-01 15:57:57.134079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134189] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134296] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134368] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134374] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134377] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134388] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134488] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134494] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 
15:57:57.134569] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134612] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134687] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134800] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.134887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.134892] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.134895] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134899] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.134906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.134910] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:47.162 [2024-10-01 15:57:57.134913] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.134918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.134928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.135001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.135006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.135009] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.135022] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135028] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.135033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.135044] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.135106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.135112] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.135115] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135118] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) 
on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.135126] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.135138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.135147] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.135215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.135221] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.162 [2024-10-01 15:57:57.135224] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.162 [2024-10-01 15:57:57.135236] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135239] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.162 [2024-10-01 15:57:57.135242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.162 [2024-10-01 15:57:57.135248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.162 [2024-10-01 15:57:57.135257] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.162 [2024-10-01 15:57:57.135325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.162 [2024-10-01 15:57:57.135331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:47.163 [2024-10-01 15:57:57.135334] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.163 [2024-10-01 15:57:57.135345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135348] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135351] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.163 [2024-10-01 15:57:57.135357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.135365] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.163 [2024-10-01 15:57:57.135427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.135432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.135435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.163 [2024-10-01 15:57:57.135446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.163 [2024-10-01 15:57:57.135458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.135469] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x12c5900, cid 3, qid 0 00:23:47.163 [2024-10-01 15:57:57.135535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.135541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.135544] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.163 [2024-10-01 15:57:57.135555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.135562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.163 [2024-10-01 15:57:57.135568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.135577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.163 [2024-10-01 15:57:57.138869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.138877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.138880] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.138883] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.163 [2024-10-01 15:57:57.138894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.138897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.138900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1265760) 00:23:47.163 [2024-10-01 15:57:57.138906] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.138917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12c5900, cid 3, qid 0 00:23:47.163 [2024-10-01 15:57:57.139065] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.139071] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.139074] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.139077] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12c5900) on tqpair=0x1265760 00:23:47.163 [2024-10-01 15:57:57.139083] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:47.163 00:23:47.163 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:47.163 [2024-10-01 15:57:57.175543] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 
00:23:47.163 [2024-10-01 15:57:57.175576] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517147 ] 00:23:47.163 [2024-10-01 15:57:57.201354] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:47.163 [2024-10-01 15:57:57.201392] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:47.163 [2024-10-01 15:57:57.201397] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:47.163 [2024-10-01 15:57:57.201406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:47.163 [2024-10-01 15:57:57.201414] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:47.163 [2024-10-01 15:57:57.205050] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:47.163 [2024-10-01 15:57:57.205076] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe6e760 0 00:23:47.163 [2024-10-01 15:57:57.212880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:47.163 [2024-10-01 15:57:57.212895] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:47.163 [2024-10-01 15:57:57.212899] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:47.163 [2024-10-01 15:57:57.212902] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:47.163 [2024-10-01 15:57:57.212925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.212930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.212933] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.163 [2024-10-01 15:57:57.212943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:47.163 [2024-10-01 15:57:57.212960] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.163 [2024-10-01 15:57:57.220873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.220881] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.220884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.220888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.163 [2024-10-01 15:57:57.220896] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:47.163 [2024-10-01 15:57:57.220902] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:47.163 [2024-10-01 15:57:57.220906] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:47.163 [2024-10-01 15:57:57.220917] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.220921] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.220924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.163 [2024-10-01 15:57:57.220930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.220943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.163 [2024-10-01 15:57:57.221074] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.221080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.221083] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.221087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.163 [2024-10-01 15:57:57.221091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:47.163 [2024-10-01 15:57:57.221097] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:47.163 [2024-10-01 15:57:57.221104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.221107] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.221111] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.163 [2024-10-01 15:57:57.221116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.163 [2024-10-01 15:57:57.221126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.163 [2024-10-01 15:57:57.221190] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.163 [2024-10-01 15:57:57.221198] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.163 [2024-10-01 15:57:57.221201] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.163 [2024-10-01 15:57:57.221204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.221209] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
check en (no timeout) 00:23:47.164 [2024-10-01 15:57:57.221216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221222] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.221234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.164 [2024-10-01 15:57:57.221244] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.221305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.221310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.221313] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.221321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221332] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221336] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.221341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:47.164 [2024-10-01 15:57:57.221351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.221419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.221425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.221428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221431] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.221435] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:47.164 [2024-10-01 15:57:57.221439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221550] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:47.164 [2024-10-01 15:57:57.221554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.221572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:47.164 [2024-10-01 15:57:57.221583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.221646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.221652] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.221655] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221658] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.221662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:47.164 [2024-10-01 15:57:57.221670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221674] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.221682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.164 [2024-10-01 15:57:57.221692] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.221755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.221760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.221763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.221770] 
nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:47.164 [2024-10-01 15:57:57.221774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:47.164 [2024-10-01 15:57:57.221781] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:47.164 [2024-10-01 15:57:57.221787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:47.164 [2024-10-01 15:57:57.221795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.221804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.164 [2024-10-01 15:57:57.221814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.221907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.164 [2024-10-01 15:57:57.221914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.164 [2024-10-01 15:57:57.221917] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221921] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=4096, cccid=0 00:23:47.164 [2024-10-01 15:57:57.221925] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xece480) on tqpair(0xe6e760): expected_datao=0, payload_size=4096 00:23:47.164 [2024-10-01 15:57:57.221928] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221941] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.221945] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.262999] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.263008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.263011] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263015] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.263025] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:47.164 [2024-10-01 15:57:57.263030] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:47.164 [2024-10-01 15:57:57.263034] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:47.164 [2024-10-01 15:57:57.263037] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:47.164 [2024-10-01 15:57:57.263041] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:47.164 [2024-10-01 15:57:57.263046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:47.164 [2024-10-01 15:57:57.263054] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:47.164 [2024-10-01 15:57:57.263060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263064] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.263074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.164 [2024-10-01 15:57:57.263086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece480, cid 0, qid 0 00:23:47.164 [2024-10-01 15:57:57.263147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.164 [2024-10-01 15:57:57.263153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.164 [2024-10-01 15:57:57.263156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263159] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.164 [2024-10-01 15:57:57.263165] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe6e760) 00:23:47.164 [2024-10-01 15:57:57.263176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.164 [2024-10-01 15:57:57.263181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.164 [2024-10-01 15:57:57.263188] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:47.165 [2024-10-01 15:57:57.263198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.165 [2024-10-01 15:57:57.263214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263220] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.165 [2024-10-01 15:57:57.263229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263247] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.165 [2024-10-01 15:57:57.263267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xece480, cid 0, qid 0 00:23:47.165 [2024-10-01 15:57:57.263272] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece600, cid 1, qid 0 00:23:47.165 [2024-10-01 15:57:57.263276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece780, cid 2, qid 0 00:23:47.165 [2024-10-01 15:57:57.263280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.165 [2024-10-01 15:57:57.263284] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.165 [2024-10-01 15:57:57.263378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.165 [2024-10-01 15:57:57.263384] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.165 [2024-10-01 15:57:57.263387] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.165 [2024-10-01 15:57:57.263395] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:47.165 [2024-10-01 15:57:57.263399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263406] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263419] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263426] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.165 [2024-10-01 15:57:57.263441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.165 [2024-10-01 15:57:57.263503] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.165 [2024-10-01 15:57:57.263509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.165 [2024-10-01 15:57:57.263512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.165 [2024-10-01 15:57:57.263567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.165 [2024-10-01 15:57:57.263602] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.165 [2024-10-01 15:57:57.263675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.165 [2024-10-01 15:57:57.263681] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.165 [2024-10-01 15:57:57.263684] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=4096, cccid=4 00:23:47.165 [2024-10-01 15:57:57.263691] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xecea80) on tqpair(0xe6e760): expected_datao=0, payload_size=4096 00:23:47.165 [2024-10-01 15:57:57.263695] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263701] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263704] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263728] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.165 [2024-10-01 15:57:57.263733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.165 [2024-10-01 15:57:57.263736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.165 [2024-10-01 15:57:57.263747] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:47.165 [2024-10-01 15:57:57.263758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:47.165 [2024-10-01 15:57:57.263773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xe6e760) 00:23:47.165 [2024-10-01 15:57:57.263782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.165 [2024-10-01 15:57:57.263792] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.165 [2024-10-01 15:57:57.263891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.165 [2024-10-01 15:57:57.263897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.165 [2024-10-01 15:57:57.263900] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263903] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=4096, cccid=4 00:23:47.165 [2024-10-01 15:57:57.263907] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xecea80) on tqpair(0xe6e760): expected_datao=0, payload_size=4096 00:23:47.165 [2024-10-01 15:57:57.263911] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263916] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263920] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.165 [2024-10-01 15:57:57.263941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.165 [2024-10-01 15:57:57.263944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.165 [2024-10-01 15:57:57.263947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.165 [2024-10-01 15:57:57.263958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:47.166 [2024-10-01 
15:57:57.263967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.263973] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.263977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.263983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.263994] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.166 [2024-10-01 15:57:57.264071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.166 [2024-10-01 15:57:57.264077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.166 [2024-10-01 15:57:57.264080] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264083] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=4096, cccid=4 00:23:47.166 [2024-10-01 15:57:57.264087] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xecea80) on tqpair(0xe6e760): expected_datao=0, payload_size=4096 00:23:47.166 [2024-10-01 15:57:57.264090] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264096] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264099] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264108] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264116] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264120] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264160] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:47.166 [2024-10-01 15:57:57.264164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:47.166 [2024-10-01 15:57:57.264169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:47.166 [2024-10-01 15:57:57.264181] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264185] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.166 [2024-10-01 15:57:57.264218] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.166 [2024-10-01 15:57:57.264225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecec00, cid 5, qid 0 00:23:47.166 [2024-10-01 15:57:57.264299] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264311] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264324] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264328] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecec00) on tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264335] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264339] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264353] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecec00, cid 5, qid 0 00:23:47.166 [2024-10-01 15:57:57.264420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264432] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecec00) on tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecec00, cid 5, qid 0 00:23:47.166 [2024-10-01 15:57:57.264520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264528] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecec00) on 
tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264543] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264558] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecec00, cid 5, qid 0 00:23:47.166 [2024-10-01 15:57:57.264615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.166 [2024-10-01 15:57:57.264621] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.166 [2024-10-01 15:57:57.264624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecec00) on tqpair=0xe6e760 00:23:47.166 [2024-10-01 15:57:57.264639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264643] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264660] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 
15:57:57.264672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.166 [2024-10-01 15:57:57.264692] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe6e760) 00:23:47.166 [2024-10-01 15:57:57.264697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-10-01 15:57:57.264708] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecec00, cid 5, qid 0 00:23:47.166 [2024-10-01 15:57:57.264712] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecea80, cid 4, qid 0 00:23:47.166 [2024-10-01 15:57:57.264716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xeced80, cid 6, qid 0 00:23:47.166 [2024-10-01 15:57:57.264720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecef00, cid 7, qid 0 00:23:47.167 [2024-10-01 15:57:57.268881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.167 [2024-10-01 15:57:57.268890] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.167 [2024-10-01 15:57:57.268893] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268896] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=8192, cccid=5 00:23:47.167 [2024-10-01 15:57:57.268900] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xecec00) on tqpair(0xe6e760): expected_datao=0, payload_size=8192 00:23:47.167 [2024-10-01 15:57:57.268904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268918] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268922] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268927] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.167 [2024-10-01 15:57:57.268932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.167 [2024-10-01 15:57:57.268935] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268938] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=512, cccid=4 00:23:47.167 [2024-10-01 15:57:57.268942] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xecea80) on tqpair(0xe6e760): expected_datao=0, payload_size=512 00:23:47.167 [2024-10-01 15:57:57.268946] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268951] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268954] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.167 [2024-10-01 15:57:57.268964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.167 [2024-10-01 15:57:57.268967] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268970] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=512, cccid=6 00:23:47.167 [2024-10-01 15:57:57.268974] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xeced80) on tqpair(0xe6e760): expected_datao=0, payload_size=512 
00:23:47.167 [2024-10-01 15:57:57.268980] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268985] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268988] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.268993] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.167 [2024-10-01 15:57:57.268998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.167 [2024-10-01 15:57:57.269001] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269004] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe6e760): datao=0, datal=4096, cccid=7 00:23:47.167 [2024-10-01 15:57:57.269007] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xecef00) on tqpair(0xe6e760): expected_datao=0, payload_size=4096 00:23:47.167 [2024-10-01 15:57:57.269011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269017] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269020] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269030] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.167 [2024-10-01 15:57:57.269036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.167 [2024-10-01 15:57:57.269039] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269042] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecec00) on tqpair=0xe6e760 00:23:47.167 [2024-10-01 15:57:57.269053] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.167 [2024-10-01 15:57:57.269058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.167 [2024-10-01 15:57:57.269061] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269064] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecea80) on tqpair=0xe6e760 00:23:47.167 [2024-10-01 15:57:57.269072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.167 [2024-10-01 15:57:57.269077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.167 [2024-10-01 15:57:57.269080] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xeced80) on tqpair=0xe6e760 00:23:47.167 [2024-10-01 15:57:57.269089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.167 [2024-10-01 15:57:57.269094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.167 [2024-10-01 15:57:57.269097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.167 [2024-10-01 15:57:57.269100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecef00) on tqpair=0xe6e760 00:23:47.167 ===================================================== 00:23:47.167 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.167 ===================================================== 00:23:47.167 Controller Capabilities/Features 00:23:47.167 ================================ 00:23:47.167 Vendor ID: 8086 00:23:47.167 Subsystem Vendor ID: 8086 00:23:47.167 Serial Number: SPDK00000000000001 00:23:47.167 Model Number: SPDK bdev Controller 00:23:47.167 Firmware Version: 25.01 00:23:47.167 Recommended Arb Burst: 6 00:23:47.167 IEEE OUI Identifier: e4 d2 5c 00:23:47.167 Multi-path I/O 00:23:47.167 May have multiple subsystem ports: Yes 00:23:47.167 May have multiple controllers: Yes 00:23:47.167 Associated with SR-IOV VF: No 00:23:47.167 Max Data Transfer Size: 131072 00:23:47.167 Max Number of Namespaces: 32 00:23:47.167 Max Number of I/O 
Queues: 127 00:23:47.167 NVMe Specification Version (VS): 1.3 00:23:47.167 NVMe Specification Version (Identify): 1.3 00:23:47.167 Maximum Queue Entries: 128 00:23:47.167 Contiguous Queues Required: Yes 00:23:47.167 Arbitration Mechanisms Supported 00:23:47.167 Weighted Round Robin: Not Supported 00:23:47.167 Vendor Specific: Not Supported 00:23:47.167 Reset Timeout: 15000 ms 00:23:47.167 Doorbell Stride: 4 bytes 00:23:47.167 NVM Subsystem Reset: Not Supported 00:23:47.167 Command Sets Supported 00:23:47.167 NVM Command Set: Supported 00:23:47.167 Boot Partition: Not Supported 00:23:47.167 Memory Page Size Minimum: 4096 bytes 00:23:47.167 Memory Page Size Maximum: 4096 bytes 00:23:47.167 Persistent Memory Region: Not Supported 00:23:47.167 Optional Asynchronous Events Supported 00:23:47.167 Namespace Attribute Notices: Supported 00:23:47.167 Firmware Activation Notices: Not Supported 00:23:47.167 ANA Change Notices: Not Supported 00:23:47.167 PLE Aggregate Log Change Notices: Not Supported 00:23:47.167 LBA Status Info Alert Notices: Not Supported 00:23:47.167 EGE Aggregate Log Change Notices: Not Supported 00:23:47.167 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.167 Zone Descriptor Change Notices: Not Supported 00:23:47.167 Discovery Log Change Notices: Not Supported 00:23:47.167 Controller Attributes 00:23:47.167 128-bit Host Identifier: Supported 00:23:47.167 Non-Operational Permissive Mode: Not Supported 00:23:47.167 NVM Sets: Not Supported 00:23:47.167 Read Recovery Levels: Not Supported 00:23:47.167 Endurance Groups: Not Supported 00:23:47.167 Predictable Latency Mode: Not Supported 00:23:47.167 Traffic Based Keep ALive: Not Supported 00:23:47.167 Namespace Granularity: Not Supported 00:23:47.167 SQ Associations: Not Supported 00:23:47.167 UUID List: Not Supported 00:23:47.167 Multi-Domain Subsystem: Not Supported 00:23:47.167 Fixed Capacity Management: Not Supported 00:23:47.167 Variable Capacity Management: Not Supported 00:23:47.167 Delete 
Endurance Group: Not Supported 00:23:47.167 Delete NVM Set: Not Supported 00:23:47.167 Extended LBA Formats Supported: Not Supported 00:23:47.167 Flexible Data Placement Supported: Not Supported 00:23:47.167 00:23:47.167 Controller Memory Buffer Support 00:23:47.167 ================================ 00:23:47.167 Supported: No 00:23:47.167 00:23:47.167 Persistent Memory Region Support 00:23:47.167 ================================ 00:23:47.167 Supported: No 00:23:47.167 00:23:47.167 Admin Command Set Attributes 00:23:47.167 ============================ 00:23:47.167 Security Send/Receive: Not Supported 00:23:47.167 Format NVM: Not Supported 00:23:47.167 Firmware Activate/Download: Not Supported 00:23:47.167 Namespace Management: Not Supported 00:23:47.167 Device Self-Test: Not Supported 00:23:47.167 Directives: Not Supported 00:23:47.167 NVMe-MI: Not Supported 00:23:47.167 Virtualization Management: Not Supported 00:23:47.167 Doorbell Buffer Config: Not Supported 00:23:47.167 Get LBA Status Capability: Not Supported 00:23:47.167 Command & Feature Lockdown Capability: Not Supported 00:23:47.167 Abort Command Limit: 4 00:23:47.167 Async Event Request Limit: 4 00:23:47.167 Number of Firmware Slots: N/A 00:23:47.167 Firmware Slot 1 Read-Only: N/A 00:23:47.167 Firmware Activation Without Reset: N/A 00:23:47.167 Multiple Update Detection Support: N/A 00:23:47.167 Firmware Update Granularity: No Information Provided 00:23:47.167 Per-Namespace SMART Log: No 00:23:47.167 Asymmetric Namespace Access Log Page: Not Supported 00:23:47.167 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:47.167 Command Effects Log Page: Supported 00:23:47.167 Get Log Page Extended Data: Supported 00:23:47.167 Telemetry Log Pages: Not Supported 00:23:47.167 Persistent Event Log Pages: Not Supported 00:23:47.167 Supported Log Pages Log Page: May Support 00:23:47.167 Commands Supported & Effects Log Page: Not Supported 00:23:47.167 Feature Identifiers & Effects Log Page:May Support 00:23:47.167 NVMe-MI 
Commands & Effects Log Page: May Support 00:23:47.167 Data Area 4 for Telemetry Log: Not Supported 00:23:47.167 Error Log Page Entries Supported: 128 00:23:47.168 Keep Alive: Supported 00:23:47.168 Keep Alive Granularity: 10000 ms 00:23:47.168 00:23:47.168 NVM Command Set Attributes 00:23:47.168 ========================== 00:23:47.168 Submission Queue Entry Size 00:23:47.168 Max: 64 00:23:47.168 Min: 64 00:23:47.168 Completion Queue Entry Size 00:23:47.168 Max: 16 00:23:47.168 Min: 16 00:23:47.168 Number of Namespaces: 32 00:23:47.168 Compare Command: Supported 00:23:47.168 Write Uncorrectable Command: Not Supported 00:23:47.168 Dataset Management Command: Supported 00:23:47.168 Write Zeroes Command: Supported 00:23:47.168 Set Features Save Field: Not Supported 00:23:47.168 Reservations: Supported 00:23:47.168 Timestamp: Not Supported 00:23:47.168 Copy: Supported 00:23:47.168 Volatile Write Cache: Present 00:23:47.168 Atomic Write Unit (Normal): 1 00:23:47.168 Atomic Write Unit (PFail): 1 00:23:47.168 Atomic Compare & Write Unit: 1 00:23:47.168 Fused Compare & Write: Supported 00:23:47.168 Scatter-Gather List 00:23:47.168 SGL Command Set: Supported 00:23:47.168 SGL Keyed: Supported 00:23:47.168 SGL Bit Bucket Descriptor: Not Supported 00:23:47.168 SGL Metadata Pointer: Not Supported 00:23:47.168 Oversized SGL: Not Supported 00:23:47.168 SGL Metadata Address: Not Supported 00:23:47.168 SGL Offset: Supported 00:23:47.168 Transport SGL Data Block: Not Supported 00:23:47.168 Replay Protected Memory Block: Not Supported 00:23:47.168 00:23:47.168 Firmware Slot Information 00:23:47.168 ========================= 00:23:47.168 Active slot: 1 00:23:47.168 Slot 1 Firmware Revision: 25.01 00:23:47.168 00:23:47.168 00:23:47.168 Commands Supported and Effects 00:23:47.168 ============================== 00:23:47.168 Admin Commands 00:23:47.168 -------------- 00:23:47.168 Get Log Page (02h): Supported 00:23:47.168 Identify (06h): Supported 00:23:47.168 Abort (08h): Supported 
00:23:47.168 Set Features (09h): Supported 00:23:47.168 Get Features (0Ah): Supported 00:23:47.168 Asynchronous Event Request (0Ch): Supported 00:23:47.168 Keep Alive (18h): Supported 00:23:47.168 I/O Commands 00:23:47.168 ------------ 00:23:47.168 Flush (00h): Supported LBA-Change 00:23:47.168 Write (01h): Supported LBA-Change 00:23:47.168 Read (02h): Supported 00:23:47.168 Compare (05h): Supported 00:23:47.168 Write Zeroes (08h): Supported LBA-Change 00:23:47.168 Dataset Management (09h): Supported LBA-Change 00:23:47.168 Copy (19h): Supported LBA-Change 00:23:47.168 00:23:47.168 Error Log 00:23:47.168 ========= 00:23:47.168 00:23:47.168 Arbitration 00:23:47.168 =========== 00:23:47.168 Arbitration Burst: 1 00:23:47.168 00:23:47.168 Power Management 00:23:47.168 ================ 00:23:47.168 Number of Power States: 1 00:23:47.168 Current Power State: Power State #0 00:23:47.168 Power State #0: 00:23:47.168 Max Power: 0.00 W 00:23:47.168 Non-Operational State: Operational 00:23:47.168 Entry Latency: Not Reported 00:23:47.168 Exit Latency: Not Reported 00:23:47.168 Relative Read Throughput: 0 00:23:47.168 Relative Read Latency: 0 00:23:47.168 Relative Write Throughput: 0 00:23:47.168 Relative Write Latency: 0 00:23:47.168 Idle Power: Not Reported 00:23:47.168 Active Power: Not Reported 00:23:47.168 Non-Operational Permissive Mode: Not Supported 00:23:47.168 00:23:47.168 Health Information 00:23:47.168 ================== 00:23:47.168 Critical Warnings: 00:23:47.168 Available Spare Space: OK 00:23:47.168 Temperature: OK 00:23:47.168 Device Reliability: OK 00:23:47.168 Read Only: No 00:23:47.168 Volatile Memory Backup: OK 00:23:47.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:47.168 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:47.168 Available Spare: 0% 00:23:47.168 Available Spare Threshold: 0% 00:23:47.168 Life Percentage Used:[2024-10-01 15:57:57.269182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.168 [2024-10-01 
15:57:57.269187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe6e760) 00:23:47.168 [2024-10-01 15:57:57.269194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-10-01 15:57:57.269206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xecef00, cid 7, qid 0 00:23:47.168 [2024-10-01 15:57:57.269282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.168 [2024-10-01 15:57:57.269288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.168 [2024-10-01 15:57:57.269291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xecef00) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269321] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:47.168 [2024-10-01 15:57:57.269329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece480) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-10-01 15:57:57.269341] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece600) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-10-01 15:57:57.269350] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece780) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-10-01 
15:57:57.269358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-10-01 15:57:57.269369] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269373] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269376] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.168 [2024-10-01 15:57:57.269382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-10-01 15:57:57.269393] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.168 [2024-10-01 15:57:57.269456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.168 [2024-10-01 15:57:57.269462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.168 [2024-10-01 15:57:57.269465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269474] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.168 [2024-10-01 15:57:57.269486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-10-01 15:57:57.269498] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.168 [2024-10-01 15:57:57.269569] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.168 [2024-10-01 15:57:57.269574] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.168 [2024-10-01 15:57:57.269577] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.168 [2024-10-01 15:57:57.269584] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:47.168 [2024-10-01 15:57:57.269589] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:47.168 [2024-10-01 15:57:57.269597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.168 [2024-10-01 15:57:57.269603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.168 [2024-10-01 15:57:57.269609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-10-01 15:57:57.269618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.168 [2024-10-01 15:57:57.269679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.169 [2024-10-01 15:57:57.269685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.169 [2024-10-01 15:57:57.269688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.169 [2024-10-01 15:57:57.269691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.169 [2024-10-01 15:57:57.269701] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.169 [2024-10-01 15:57:57.269705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.169 [2024-10-01 15:57:57.269708] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.169 [2024-10-01 15:57:57.269714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-10-01 15:57:57.269723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.169 [2024-10-01 15:57:57.269787] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.169 [2024-10-01 15:57:57.269793] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.169 [2024-10-01 15:57:57.269796] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.169 [2024-10-01 15:57:57.269799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760
[log condensed: this shutdown-status poll cycle — build_contig_request, capsule_cmd_send cid=3 on tqpair(0xe6e760), FABRIC PROPERTY GET qid:0 cid:3, cmd_send_complete, pdu type = 5 response, nvme_tcp_req_complete(0xece900) — repeats with only timestamps advancing from 15:57:57.269808 through 15:57:57.272675]
00:23:47.171 [2024-10-01 15:57:57.272738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type
= 5 00:23:47.171 [2024-10-01 15:57:57.272744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.171 [2024-10-01 15:57:57.272746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.272751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.171 [2024-10-01 15:57:57.272759] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.272763] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.272766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.171 [2024-10-01 15:57:57.272771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.171 [2024-10-01 15:57:57.272781] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.171 [2024-10-01 15:57:57.272848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.171 [2024-10-01 15:57:57.272854] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.171 [2024-10-01 15:57:57.272857] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.272860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.171 [2024-10-01 15:57:57.276876] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.276880] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.276883] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe6e760) 00:23:47.171 [2024-10-01 15:57:57.276889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.171 [2024-10-01 15:57:57.276900] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xece900, cid 3, qid 0 00:23:47.171 [2024-10-01 15:57:57.276974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.171 [2024-10-01 15:57:57.276980] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.171 [2024-10-01 15:57:57.276983] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.171 [2024-10-01 15:57:57.276986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xece900) on tqpair=0xe6e760 00:23:47.171 [2024-10-01 15:57:57.276993] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:47.171 0% 00:23:47.171 Data Units Read: 0 00:23:47.171 Data Units Written: 0 00:23:47.171 Host Read Commands: 0 00:23:47.171 Host Write Commands: 0 00:23:47.171 Controller Busy Time: 0 minutes 00:23:47.171 Power Cycles: 0 00:23:47.171 Power On Hours: 0 hours 00:23:47.171 Unsafe Shutdowns: 0 00:23:47.171 Unrecoverable Media Errors: 0 00:23:47.171 Lifetime Error Log Entries: 0 00:23:47.171 Warning Temperature Time: 0 minutes 00:23:47.171 Critical Temperature Time: 0 minutes 00:23:47.171 00:23:47.171 Number of Queues 00:23:47.171 ================ 00:23:47.171 Number of I/O Submission Queues: 127 00:23:47.171 Number of I/O Completion Queues: 127 00:23:47.171 00:23:47.171 Active Namespaces 00:23:47.171 ================= 00:23:47.171 Namespace ID:1 00:23:47.171 Error Recovery Timeout: Unlimited 00:23:47.171 Command Set Identifier: NVM (00h) 00:23:47.171 Deallocate: Supported 00:23:47.171 Deallocated/Unwritten Error: Not Supported 00:23:47.171 Deallocated Read Value: Unknown 00:23:47.171 Deallocate in Write Zeroes: Not Supported 00:23:47.171 Deallocated Guard Field: 0xFFFF 00:23:47.171 Flush: Supported 00:23:47.171 Reservation: Supported 00:23:47.171 Namespace Sharing Capabilities: Multiple Controllers 00:23:47.171 Size (in 
LBAs): 131072 (0GiB) 00:23:47.171 Capacity (in LBAs): 131072 (0GiB) 00:23:47.171 Utilization (in LBAs): 131072 (0GiB) 00:23:47.171 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:47.171 EUI64: ABCDEF0123456789 00:23:47.171 UUID: 78c6d33a-639c-42ad-afb3-be60725f37f2 00:23:47.172 Thin Provisioning: Not Supported 00:23:47.172 Per-NS Atomic Units: Yes 00:23:47.172 Atomic Boundary Size (Normal): 0 00:23:47.172 Atomic Boundary Size (PFail): 0 00:23:47.172 Atomic Boundary Offset: 0 00:23:47.172 Maximum Single Source Range Length: 65535 00:23:47.172 Maximum Copy Length: 65535 00:23:47.172 Maximum Source Range Count: 1 00:23:47.172 NGUID/EUI64 Never Reused: No 00:23:47.172 Namespace Write Protected: No 00:23:47.172 Number of LBA Formats: 1 00:23:47.172 Current LBA Format: LBA Format #00 00:23:47.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:47.172 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:47.172 
15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.172 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.172 rmmod nvme_tcp 00:23:47.172 rmmod nvme_fabrics 00:23:47.172 rmmod nvme_keyring 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 2516897 ']' 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 2516897 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2516897 ']' 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2516897 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2516897 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2516897' 00:23:47.430 killing process with pid 2516897 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2516897 00:23:47.430 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2516897 00:23:47.689 15:57:57 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.689 15:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.592 00:23:49.592 real 0m9.914s 00:23:49.592 user 0m7.848s 00:23:49.592 sys 0m4.812s 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.592 ************************************ 00:23:49.592 END TEST nvmf_identify 00:23:49.592 ************************************ 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.592 15:57:59 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.592 ************************************ 00:23:49.592 START TEST nvmf_perf 00:23:49.592 ************************************ 00:23:49.592 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.851 * Looking for test storage... 00:23:49.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.851 
15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.851 15:57:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:49.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.851 --rc genhtml_branch_coverage=1 00:23:49.851 --rc genhtml_function_coverage=1 00:23:49.851 --rc genhtml_legend=1 00:23:49.851 --rc geninfo_all_blocks=1 00:23:49.851 --rc geninfo_unexecuted_blocks=1 00:23:49.851 00:23:49.851 ' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:49.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.851 --rc genhtml_branch_coverage=1 00:23:49.851 --rc genhtml_function_coverage=1 00:23:49.851 --rc genhtml_legend=1 00:23:49.851 --rc geninfo_all_blocks=1 00:23:49.851 --rc geninfo_unexecuted_blocks=1 00:23:49.851 00:23:49.851 ' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:49.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.851 --rc genhtml_branch_coverage=1 00:23:49.851 --rc genhtml_function_coverage=1 00:23:49.851 --rc genhtml_legend=1 00:23:49.851 --rc geninfo_all_blocks=1 00:23:49.851 --rc geninfo_unexecuted_blocks=1 00:23:49.851 00:23:49.851 ' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:49.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.851 --rc genhtml_branch_coverage=1 00:23:49.851 --rc genhtml_function_coverage=1 00:23:49.851 --rc genhtml_legend=1 00:23:49.851 --rc geninfo_all_blocks=1 00:23:49.851 --rc geninfo_unexecuted_blocks=1 00:23:49.851 00:23:49.851 ' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.851 15:57:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.851 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.851 15:57:59 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.852 15:57:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:49.852 15:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.418 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.418 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.418 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:56.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.419 15:58:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.419 
15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.419 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.419 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # 
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:56.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:23:56.419 00:23:56.419 --- 10.0.0.2 ping statistics --- 00:23:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.419 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:23:56.419 00:23:56.419 --- 10.0.0.1 ping statistics --- 00:23:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.419 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=2520672 00:23:56.419 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 2520672 00:23:56.420 
15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2520672 ']' 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.420 15:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.420 [2024-10-01 15:58:05.985534] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:23:56.420 [2024-10-01 15:58:05.985578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.420 [2024-10-01 15:58:06.054649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.420 [2024-10-01 15:58:06.137995] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.420 [2024-10-01 15:58:06.138029] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.420 [2024-10-01 15:58:06.138036] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.420 [2024-10-01 15:58:06.138043] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.420 [2024-10-01 15:58:06.138048] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.420 [2024-10-01 15:58:06.138105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.420 [2024-10-01 15:58:06.138212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.420 [2024-10-01 15:58:06.138324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.420 [2024-10-01 15:58:06.138325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:56.678 15:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:59.964 15:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:59.964 15:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:59.964 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:59.964 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:00.223 15:58:10 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:00.223 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:00.223 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:00.223 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:00.223 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:00.482 [2024-10-01 15:58:10.467882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.482 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.740 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:00.740 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.740 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:00.740 15:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:00.999 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.258 [2024-10-01 15:58:11.278878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.258 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:01.516 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:01.516 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:01.516 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:01.516 15:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:02.893 Initializing NVMe Controllers 00:24:02.894 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:02.894 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:02.894 Initialization complete. Launching workers. 00:24:02.894 ======================================================== 00:24:02.894 Latency(us) 00:24:02.894 Device Information : IOPS MiB/s Average min max 00:24:02.894 PCIE (0000:5e:00.0) NSID 1 from core 0: 97988.90 382.77 326.07 15.11 4658.20 00:24:02.894 ======================================================== 00:24:02.894 Total : 97988.90 382.77 326.07 15.11 4658.20 00:24:02.894 00:24:02.894 15:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.271 Initializing NVMe Controllers 00:24:04.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.271 Initialization complete. Launching workers. 
00:24:04.271 ======================================================== 00:24:04.271 Latency(us) 00:24:04.271 Device Information : IOPS MiB/s Average min max 00:24:04.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 120.00 0.47 8523.79 106.73 44835.26 00:24:04.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18848.53 7963.25 47888.32 00:24:04.271 ======================================================== 00:24:04.271 Total : 175.00 0.68 11768.71 106.73 47888.32 00:24:04.271 00:24:04.271 15:58:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:05.207 Initializing NVMe Controllers 00:24:05.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:05.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:05.207 Initialization complete. Launching workers. 
00:24:05.207 ======================================================== 00:24:05.207 Latency(us) 00:24:05.207 Device Information : IOPS MiB/s Average min max 00:24:05.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11320.95 44.22 2839.84 504.68 44515.65 00:24:05.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3793.98 14.82 8467.93 5418.93 18198.23 00:24:05.207 ======================================================== 00:24:05.207 Total : 15114.94 59.04 4252.54 504.68 44515.65 00:24:05.207 00:24:05.207 15:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:05.207 15:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:05.207 15:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.493 Initializing NVMe Controllers 00:24:08.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.493 Controller IO queue size 128, less than required. 00:24:08.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.493 Controller IO queue size 128, less than required. 00:24:08.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:08.493 Initialization complete. Launching workers. 
00:24:08.493 ======================================================== 00:24:08.493 Latency(us) 00:24:08.493 Device Information : IOPS MiB/s Average min max 00:24:08.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1778.99 444.75 72940.15 48883.57 126767.28 00:24:08.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.50 150.62 217498.58 81208.17 353121.98 00:24:08.493 ======================================================== 00:24:08.493 Total : 2381.48 595.37 109512.25 48883.57 353121.98 00:24:08.493 00:24:08.493 15:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:08.493 No valid NVMe controllers or AIO or URING devices found 00:24:08.493 Initializing NVMe Controllers 00:24:08.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.493 Controller IO queue size 128, less than required. 00:24:08.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.493 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:08.493 Controller IO queue size 128, less than required. 00:24:08.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.493 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:08.493 WARNING: Some requested NVMe devices were skipped 00:24:08.493 15:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:11.022 Initializing NVMe Controllers 00:24:11.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.022 Controller IO queue size 128, less than required. 00:24:11.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.022 Controller IO queue size 128, less than required. 00:24:11.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:11.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.022 Initialization complete. Launching workers. 
00:24:11.022 00:24:11.022 ==================== 00:24:11.022 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:11.022 TCP transport: 00:24:11.022 polls: 12516 00:24:11.022 idle_polls: 8510 00:24:11.022 sock_completions: 4006 00:24:11.022 nvme_completions: 6125 00:24:11.022 submitted_requests: 9214 00:24:11.022 queued_requests: 1 00:24:11.022 00:24:11.022 ==================== 00:24:11.022 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:11.022 TCP transport: 00:24:11.022 polls: 12186 00:24:11.022 idle_polls: 7672 00:24:11.022 sock_completions: 4514 00:24:11.022 nvme_completions: 6543 00:24:11.022 submitted_requests: 9780 00:24:11.022 queued_requests: 1 00:24:11.022 ======================================================== 00:24:11.022 Latency(us) 00:24:11.022 Device Information : IOPS MiB/s Average min max 00:24:11.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1530.99 382.75 86615.93 40718.32 142902.61 00:24:11.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1635.49 408.87 78529.66 45425.79 110154.50 00:24:11.022 ======================================================== 00:24:11.022 Total : 3166.47 791.62 82439.36 40718.32 142902.61 00:24:11.022 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.022 rmmod nvme_tcp 00:24:11.022 rmmod nvme_fabrics 00:24:11.022 rmmod nvme_keyring 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 2520672 ']' 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 2520672 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2520672 ']' 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2520672 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.022 15:58:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2520672 00:24:11.023 15:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:11.023 15:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:11.023 15:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2520672' 00:24:11.023 killing process with pid 2520672 00:24:11.023 15:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 2520672 00:24:11.023 15:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2520672 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:12.926 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:24:13.185 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.185 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.185 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.185 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.185 15:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.089 00:24:15.089 real 0m25.414s 00:24:15.089 user 1m7.519s 00:24:15.089 sys 0m8.359s 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.089 ************************************ 00:24:15.089 END TEST nvmf_perf 00:24:15.089 ************************************ 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.089 ************************************ 00:24:15.089 START TEST nvmf_fio_host 00:24:15.089 ************************************ 00:24:15.089 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:15.349 * Looking for test storage... 00:24:15.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.349 15:58:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.349 15:58:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.349 --rc genhtml_branch_coverage=1 00:24:15.349 --rc genhtml_function_coverage=1 00:24:15.349 --rc genhtml_legend=1 00:24:15.349 --rc geninfo_all_blocks=1 00:24:15.349 --rc geninfo_unexecuted_blocks=1 00:24:15.349 00:24:15.349 ' 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.349 --rc genhtml_branch_coverage=1 00:24:15.349 --rc genhtml_function_coverage=1 00:24:15.349 --rc genhtml_legend=1 00:24:15.349 --rc geninfo_all_blocks=1 00:24:15.349 --rc geninfo_unexecuted_blocks=1 00:24:15.349 00:24:15.349 ' 00:24:15.349 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.349 --rc genhtml_branch_coverage=1 00:24:15.349 --rc genhtml_function_coverage=1 00:24:15.350 --rc genhtml_legend=1 00:24:15.350 --rc geninfo_all_blocks=1 00:24:15.350 --rc geninfo_unexecuted_blocks=1 00:24:15.350 00:24:15.350 ' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.350 --rc genhtml_branch_coverage=1 00:24:15.350 --rc genhtml_function_coverage=1 00:24:15.350 --rc genhtml_legend=1 00:24:15.350 --rc geninfo_all_blocks=1 00:24:15.350 --rc geninfo_unexecuted_blocks=1 00:24:15.350 00:24:15.350 ' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:15.350 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:15.351 15:58:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.351 15:58:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:21.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:21.920 15:58:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:21.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:21.920 15:58:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:21.920 Found net devices under 0000:86:00.0: cvl_0_0 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:21.920 Found net devices under 0000:86:00.1: cvl_0_1 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.920 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:24:21.921 00:24:21.921 --- 10.0.0.2 ping statistics --- 00:24:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.921 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:21.921 00:24:21.921 --- 10.0.0.1 ping statistics --- 00:24:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.921 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2526996 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2526996 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2526996 ']' 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.921 15:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.921 [2024-10-01 15:58:31.465511] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:24:21.921 [2024-10-01 15:58:31.465562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.921 [2024-10-01 15:58:31.537916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.921 [2024-10-01 15:58:31.612049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.921 [2024-10-01 15:58:31.612089] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:21.921 [2024-10-01 15:58:31.612097] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.921 [2024-10-01 15:58:31.612103] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.921 [2024-10-01 15:58:31.612109] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.921 [2024-10-01 15:58:31.612167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.921 [2024-10-01 15:58:31.612200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.921 [2024-10-01 15:58:31.612303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.921 [2024-10-01 15:58:31.612304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.178 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.178 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:22.178 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:22.436 [2024-10-01 15:58:32.480159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.436 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:22.436 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.436 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.436 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:22.694 Malloc1 00:24:22.694 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.953 15:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.953 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.211 [2024-10-01 15:58:33.310343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.211 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:23.469 15:58:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.469 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:23.470 15:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.728 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:23.728 fio-3.35 00:24:23.728 Starting 1 thread 00:24:26.411 00:24:26.411 test: (groupid=0, jobs=1): err= 0: pid=2527600: Tue Oct 1 15:58:36 2024 00:24:26.411 read: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(93.6MiB/2005msec) 00:24:26.411 slat (nsec): min=1503, max=322935, avg=1805.05, stdev=2735.45 00:24:26.411 clat (usec): min=3453, max=10004, avg=5926.49, stdev=467.54 00:24:26.411 lat (usec): min=3455, max=10006, avg=5928.29, stdev=467.56 00:24:26.411 clat percentiles (usec): 00:24:26.411 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:24:26.411 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:26.411 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:24:26.411 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9241], 00:24:26.411 | 99.99th=[ 9896] 00:24:26.411 bw ( KiB/s): min=46856, max=48248, per=99.94%, avg=47756.00, stdev=621.93, samples=4 00:24:26.411 iops : min=11714, max=12062, avg=11939.00, stdev=155.48, samples=4 00:24:26.411 write: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec); 0 zone resets 00:24:26.411 slat (nsec): min=1545, max=252721, avg=1861.26, stdev=1843.25 00:24:26.411 clat (usec): min=2632, max=9121, avg=4789.69, stdev=392.18 00:24:26.411 lat (usec): min=2647, max=9122, avg=4791.55, stdev=392.29 00:24:26.411 clat percentiles (usec): 00:24:26.411 | 1.00th=[ 3851], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:26.411 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:24:26.411 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:26.411 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7242], 99.95th=[ 8848], 00:24:26.411 | 99.99th=[ 9110] 00:24:26.411 bw ( KiB/s): min=47232, max=48064, per=100.00%, avg=47576.00, stdev=359.85, samples=4 00:24:26.411 iops : min=11808, max=12016, avg=11894.00, stdev=89.96, samples=4 00:24:26.411 lat (msec) : 4=1.03%, 10=98.97%, 20=0.01% 00:24:26.411 cpu : usr=73.60%, sys=24.95%, ctx=133, majf=0, minf=4 00:24:26.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.411 issued rwts: total=23953,23840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.411 00:24:26.411 Run status group 0 (all jobs): 00:24:26.411 READ: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=93.6MiB (98.1MB), run=2005-2005msec 00:24:26.411 WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:26.411 15:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.706 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:26.706 fio-3.35 00:24:26.706 Starting 1 thread 00:24:29.237 00:24:29.237 test: (groupid=0, jobs=1): err= 0: pid=2528177: Tue Oct 1 15:58:39 2024 00:24:29.237 read: IOPS=11.1k, BW=173MiB/s (181MB/s)(346MiB/2004msec) 00:24:29.237 slat (nsec): min=2486, max=86566, avg=2814.16, stdev=1201.96 00:24:29.237 clat (usec): min=1696, max=12569, avg=6625.32, stdev=1607.58 00:24:29.237 lat (usec): min=1698, max=12572, avg=6628.13, stdev=1607.67 00:24:29.237 clat percentiles (usec): 00:24:29.237 | 1.00th=[ 3425], 5.00th=[ 4178], 10.00th=[ 4555], 20.00th=[ 5211], 00:24:29.237 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7046], 00:24:29.237 | 70.00th=[ 7439], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9503], 00:24:29.237 | 99.00th=[10683], 99.50th=[11338], 99.90th=[11994], 99.95th=[12125], 00:24:29.237 | 99.99th=[12387] 00:24:29.237 bw ( KiB/s): min=79872, max=94690, per=51.02%, avg=90248.50, stdev=6993.64, samples=4 00:24:29.237 iops : min= 4992, max= 5918, avg=5640.50, stdev=437.08, samples=4 00:24:29.237 write: IOPS=6423, BW=100MiB/s (105MB/s)(184MiB/1834msec); 0 zone resets 00:24:29.237 slat (usec): min=28, max=317, avg=31.59, stdev= 6.22 00:24:29.237 clat (usec): min=2707, max=13843, avg=8548.48, stdev=1457.90 00:24:29.237 lat (usec): min=2744, max=13873, avg=8580.06, stdev=1458.79 00:24:29.237 clat percentiles (usec): 00:24:29.237 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6849], 
20.00th=[ 7308], 00:24:29.237 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:29.237 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:24:29.237 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13304], 99.95th=[13566], 00:24:29.237 | 99.99th=[13829] 00:24:29.237 bw ( KiB/s): min=84896, max=98459, per=91.17%, avg=93702.75, stdev=6019.52, samples=4 00:24:29.237 iops : min= 5306, max= 6153, avg=5856.25, stdev=376.04, samples=4 00:24:29.237 lat (msec) : 2=0.02%, 4=2.18%, 10=90.30%, 20=7.49% 00:24:29.237 cpu : usr=86.78%, sys=12.33%, ctx=86, majf=0, minf=4 00:24:29.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:29.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.237 issued rwts: total=22156,11781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.237 00:24:29.237 Run status group 0 (all jobs): 00:24:29.237 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2004-2004msec 00:24:29.237 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=184MiB (193MB), run=1834-1834msec 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 
00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:29.237 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.238 rmmod nvme_tcp 00:24:29.238 rmmod nvme_fabrics 00:24:29.238 rmmod nvme_keyring 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 2526996 ']' 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 2526996 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2526996 ']' 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2526996 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2526996 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2526996' 
00:24:29.238 killing process with pid 2526996 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2526996 00:24:29.238 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2526996 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.497 15:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.033 00:24:32.033 real 0m16.446s 00:24:32.033 user 0m49.404s 00:24:32.033 sys 0m6.554s 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.033 ************************************ 
00:24:32.033 END TEST nvmf_fio_host 00:24:32.033 ************************************ 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.033 ************************************ 00:24:32.033 START TEST nvmf_failover 00:24:32.033 ************************************ 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.033 * Looking for test storage... 00:24:32.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:32.033 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.034 15:58:41 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:32.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.034 --rc genhtml_branch_coverage=1 00:24:32.034 --rc genhtml_function_coverage=1 00:24:32.034 --rc genhtml_legend=1 00:24:32.034 --rc geninfo_all_blocks=1 00:24:32.034 --rc geninfo_unexecuted_blocks=1 00:24:32.034 00:24:32.034 ' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:32.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.034 --rc genhtml_branch_coverage=1 00:24:32.034 --rc genhtml_function_coverage=1 00:24:32.034 --rc genhtml_legend=1 00:24:32.034 --rc geninfo_all_blocks=1 00:24:32.034 --rc geninfo_unexecuted_blocks=1 00:24:32.034 00:24:32.034 ' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:32.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.034 --rc genhtml_branch_coverage=1 00:24:32.034 --rc genhtml_function_coverage=1 00:24:32.034 --rc genhtml_legend=1 00:24:32.034 --rc geninfo_all_blocks=1 00:24:32.034 --rc geninfo_unexecuted_blocks=1 00:24:32.034 00:24:32.034 ' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:32.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.034 --rc genhtml_branch_coverage=1 00:24:32.034 --rc genhtml_function_coverage=1 00:24:32.034 --rc genhtml_legend=1 00:24:32.034 --rc 
geninfo_all_blocks=1 00:24:32.034 --rc geninfo_unexecuted_blocks=1 00:24:32.034 00:24:32.034 ' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:32.034 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.035 15:58:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.608 15:58:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.608 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.608 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.608 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.608 
15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.608 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.608 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:24:38.609 00:24:38.609 --- 10.0.0.2 ping statistics --- 00:24:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.609 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:38.609 00:24:38.609 --- 10.0.0.1 ping statistics --- 00:24:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.609 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=2532152 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@506 -- # waitforlisten 2532152 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2532152 ']' 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.609 15:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.609 [2024-10-01 15:58:47.986094] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:24:38.609 [2024-10-01 15:58:47.986145] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.609 [2024-10-01 15:58:48.060087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:38.609 [2024-10-01 15:58:48.139231] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.609 [2024-10-01 15:58:48.139265] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.609 [2024-10-01 15:58:48.139272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.609 [2024-10-01 15:58:48.139278] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:38.609 [2024-10-01 15:58:48.139283] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.609 [2024-10-01 15:58:48.139334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.609 [2024-10-01 15:58:48.139442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.609 [2024-10-01 15:58:48.139443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.868 15:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:38.868 [2024-10-01 15:58:49.038214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.127 15:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:39.127 Malloc0 00:24:39.127 15:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.385 15:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.643 15:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.902 [2024-10-01 15:58:49.852663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.902 [2024-10-01 15:58:50.045287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.902 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.160 [2024-10-01 15:58:50.237932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2532431 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2532431 /var/tmp/bdevperf.sock 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 2532431 ']' 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.160 15:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.095 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.095 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:41.095 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.662 NVMe0n1 00:24:41.662 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.662 NVMe0n1 00:24:41.920 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2532697 00:24:41.920 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.920 15:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:42.857 15:58:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.116 [2024-10-01 15:58:53.053920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911820 is same with the state(6) to be set 00:24:43.116 
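Earlier in this run (common.sh@265-@293) the target app was moved into the cvl_0_0_ns_spdk namespace by prepending an `ip netns exec` prefix onto the app's argv array, so every later `"${NVMF_APP[@]}"` invocation enters the namespace without any call site changing. A minimal bash sketch of that prepend pattern (the nvmf_tgt path is abbreviated here; flags copied from the log):

```shell
#!/usr/bin/env bash
# Namespace prefix, as set up by nvmf_tcp_init in the log above.
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
# The target app command (path abbreviated for this sketch).
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE)
# Prepend the netns prefix, mirroring common.sh@293:
# NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
# prints: ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
```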
15:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:46.401 15:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.401 NVMe0n1 00:24:46.401 15:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.661 [2024-10-01 15:58:56.598968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911cf0 is same with the state(6) to be set 00:24:46.661 15:58:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:49.950 15:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.950 [2024-10-01 15:58:59.814500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.950 15:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:50.886 15:59:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.886 [2024-10-01 15:59:01.033233] ctrlr.c: 834:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5bab1356-66fc-447f-8590-5e4c2bc6fa79' to connect at this address. 00:24:50.886 15:59:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2532697 00:24:57.487 { 00:24:57.487 "results": [ 00:24:57.487 { 00:24:57.487 "job": "NVMe0n1", 00:24:57.487 "core_mask": "0x1", 00:24:57.487 "workload": "verify", 00:24:57.487 "status": "finished", 00:24:57.487 "verify_range": { 00:24:57.487 "start": 0, 00:24:57.487 "length": 16384 00:24:57.487 }, 00:24:57.487 "queue_depth": 128, 00:24:57.487 "io_size": 4096, 00:24:57.487 "runtime": 15.006236, 00:24:57.487 "iops": 11380.602037712855, 00:24:57.487 "mibps": 44.45547670981584, 00:24:57.487 "io_failed": 0, 00:24:57.487 "io_timeout": 0, 00:24:57.487 "avg_latency_us": 11225.580804175797, 00:24:57.487 "min_latency_us": 1997.287619047619, 00:24:57.487 "max_latency_us": 16352.792380952382 00:24:57.487 } 00:24:57.487 ], 00:24:57.487 "core_count": 1 00:24:57.487 } 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2532431 ']' 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@954 -- # kill -0 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2532431' 00:24:57.487 killing process with pid 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2532431 00:24:57.487 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.487 [2024-10-01 15:58:50.314557] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:24:57.487 [2024-10-01 15:58:50.314610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532431 ] 00:24:57.487 [2024-10-01 15:58:50.384715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.487 [2024-10-01 15:58:50.458233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.487 Running I/O for 15 seconds... 
00:24:57.487 11044.00 IOPS, 43.14 MiB/s [2024-10-01 15:58:53.055086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 
15:58:53.055204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.487 [2024-10-01 15:58:53.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.487 [2024-10-01 15:58:53.055240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:126 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.488 [2024-10-01 15:58:53.055624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.488 [2024-10-01 15:58:53.055804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.488 [2024-10-01 15:58:53.055812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.488 [2024-10-01 15:58:53.055818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 
[2024-10-01 15:58:53.055878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.055986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.055994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 
15:58:53.056125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.489 [2024-10-01 15:58:53.056395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.489 [2024-10-01 15:58:53.056402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 
[2024-10-01 15:58:53.056527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.490 [2024-10-01 15:58:53.056737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:57.490 [2024-10-01 15:58:53.056808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 
15:58:53.056970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.056982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.056988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.490 [2024-10-01 15:58:53.056995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.490 [2024-10-01 15:58:53.056999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.490 [2024-10-01 15:58:53.057005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:24:57.490 [2024-10-01 15:58:53.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 [2024-10-01 15:58:53.057027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 
[2024-10-01 15:58:53.057049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 [2024-10-01 15:58:53.057072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 [2024-10-01 15:58:53.057094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 [2024-10-01 15:58:53.057117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.491 [2024-10-01 15:58:53.057134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.491 [2024-10-01 15:58:53.057140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:24:57.491 [2024-10-01 15:58:53.057147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057187] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x992ec0 was disconnected and freed. reset controller. 00:24:57.491 [2024-10-01 15:58:53.057250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.491 [2024-10-01 15:58:53.057261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.491 [2024-10-01 15:58:53.057275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.491 [2024-10-01 15:58:53.057289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:57.491 [2024-10-01 15:58:53.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.491 [2024-10-01 15:58:53.057309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.058255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.491 [2024-10-01 15:58:53.058283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.058489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.491 [2024-10-01 15:58:53.058504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.491 [2024-10-01 15:58:53.058513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.058525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.058536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.491 [2024-10-01 15:58:53.058543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.491 [2024-10-01 15:58:53.058551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.491 [2024-10-01 15:58:53.058568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.491 [2024-10-01 15:58:53.070223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.491 [2024-10-01 15:58:53.070507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.491 [2024-10-01 15:58:53.070524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.491 [2024-10-01 15:58:53.070532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.070663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.070804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.491 [2024-10-01 15:58:53.070813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.491 [2024-10-01 15:58:53.070825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.491 [2024-10-01 15:58:53.070857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.491 [2024-10-01 15:58:53.081027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.491 [2024-10-01 15:58:53.081278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.491 [2024-10-01 15:58:53.081295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.491 [2024-10-01 15:58:53.081303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.081315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.081326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.491 [2024-10-01 15:58:53.081333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.491 [2024-10-01 15:58:53.081339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.491 [2024-10-01 15:58:53.081352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.491 [2024-10-01 15:58:53.093869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.491 [2024-10-01 15:58:53.094049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.491 [2024-10-01 15:58:53.094064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.491 [2024-10-01 15:58:53.094072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.094083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.094094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.491 [2024-10-01 15:58:53.094101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.491 [2024-10-01 15:58:53.094107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.491 [2024-10-01 15:58:53.094120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.491 [2024-10-01 15:58:53.106146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.491 [2024-10-01 15:58:53.106414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.491 [2024-10-01 15:58:53.106431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.491 [2024-10-01 15:58:53.106440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.491 [2024-10-01 15:58:53.106594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.491 [2024-10-01 15:58:53.106796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.491 [2024-10-01 15:58:53.106808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.491 [2024-10-01 15:58:53.106815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.491 [2024-10-01 15:58:53.106849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.491 [2024-10-01 15:58:53.117059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.492 [2024-10-01 15:58:53.117321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.492 [2024-10-01 15:58:53.117337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.492 [2024-10-01 15:58:53.117345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.492 [2024-10-01 15:58:53.117357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.492 [2024-10-01 15:58:53.117368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.492 [2024-10-01 15:58:53.117374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.492 [2024-10-01 15:58:53.117381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.492 [2024-10-01 15:58:53.117394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.492 [2024-10-01 15:58:53.128070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.492 [2024-10-01 15:58:53.128240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.492 [2024-10-01 15:58:53.128254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.492 [2024-10-01 15:58:53.128262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.492 [2024-10-01 15:58:53.128274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.492 [2024-10-01 15:58:53.128286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.492 [2024-10-01 15:58:53.128292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.492 [2024-10-01 15:58:53.128299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.492 [2024-10-01 15:58:53.128313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.492 [2024-10-01 15:58:53.139980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.492 [2024-10-01 15:58:53.140266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.492 [2024-10-01 15:58:53.140283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.492 [2024-10-01 15:58:53.140291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.492 [2024-10-01 15:58:53.140452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.492 [2024-10-01 15:58:53.140606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.492 [2024-10-01 15:58:53.140616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.492 [2024-10-01 15:58:53.140625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.492 [2024-10-01 15:58:53.140658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.492 [2024-10-01 15:58:53.151485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.492 [2024-10-01 15:58:53.151668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.492 [2024-10-01 15:58:53.151683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.492 [2024-10-01 15:58:53.151691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.492 [2024-10-01 15:58:53.151706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.492 [2024-10-01 15:58:53.151717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.492 [2024-10-01 15:58:53.151723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.492 [2024-10-01 15:58:53.151730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.492 [2024-10-01 15:58:53.151743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.492 [2024-10-01 15:58:53.162962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.492 [2024-10-01 15:58:53.163140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.492 [2024-10-01 15:58:53.163155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.492 [2024-10-01 15:58:53.163163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.492 [2024-10-01 15:58:53.163176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.492 [2024-10-01 15:58:53.163187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.492 [2024-10-01 15:58:53.163193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.492 [2024-10-01 15:58:53.163199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.492 [2024-10-01 15:58:53.163212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.492 [2024-10-01 15:58:53.174931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.492 [2024-10-01 15:58:53.175243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.492 [2024-10-01 15:58:53.175261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.492 [2024-10-01 15:58:53.175269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.492 [2024-10-01 15:58:53.175305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.492 [2024-10-01 15:58:53.175318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.492 [2024-10-01 15:58:53.175324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.492 [2024-10-01 15:58:53.175331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.492 [2024-10-01 15:58:53.175461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.492 [2024-10-01 15:58:53.186077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.492 [2024-10-01 15:58:53.186306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.492 [2024-10-01 15:58:53.186322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.492 [2024-10-01 15:58:53.186330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.492 [2024-10-01 15:58:53.186476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.492 [2024-10-01 15:58:53.186671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.492 [2024-10-01 15:58:53.186683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.492 [2024-10-01 15:58:53.186689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.492 [2024-10-01 15:58:53.186717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.492 [2024-10-01 15:58:53.196989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.492 [2024-10-01 15:58:53.197218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.492 [2024-10-01 15:58:53.197235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.492 [2024-10-01 15:58:53.197243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.492 [2024-10-01 15:58:53.197255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.492 [2024-10-01 15:58:53.197266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.492 [2024-10-01 15:58:53.197272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.492 [2024-10-01 15:58:53.197279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.492 [2024-10-01 15:58:53.197292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.492 [2024-10-01 15:58:53.207635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.492 [2024-10-01 15:58:53.207809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.492 [2024-10-01 15:58:53.207823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.492 [2024-10-01 15:58:53.207830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.492 [2024-10-01 15:58:53.207842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.492 [2024-10-01 15:58:53.207852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.492 [2024-10-01 15:58:53.207859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.492 [2024-10-01 15:58:53.207872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.492 [2024-10-01 15:58:53.207885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.492 [2024-10-01 15:58:53.218341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.492 [2024-10-01 15:58:53.218518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.492 [2024-10-01 15:58:53.218533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.492 [2024-10-01 15:58:53.218541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.492 [2024-10-01 15:58:53.218553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.492 [2024-10-01 15:58:53.218564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.492 [2024-10-01 15:58:53.218571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.218577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.218707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.228985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.229236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.229254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.229262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.229274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.229285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.229291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.229298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.229311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.241599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.241830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.241846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.241853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.241871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.241882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.241888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.241895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.241908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.253378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.253501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.253516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.253524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.253535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.253546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.253553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.253559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.253572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.264469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.264576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.264591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.264598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.264610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.264624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.264631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.264637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.264650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.274900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.275075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.275089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.275096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.275108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.275119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.275125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.275132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.275144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.286237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.286380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.286395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.286402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.286415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.286426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.286432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.286439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.286453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.297394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.297578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.297594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.297601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.297612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.297624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.297630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.297637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.297650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.309976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.310242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.310260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.310268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.310619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.310775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.310786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.310793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.310824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.321211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.321476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.321494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.321503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.321532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.321544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.321551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.321557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.321570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.332686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.332809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.493 [2024-10-01 15:58:53.332823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.493 [2024-10-01 15:58:53.332831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.493 [2024-10-01 15:58:53.333086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.493 [2024-10-01 15:58:53.333229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.493 [2024-10-01 15:58:53.333240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.493 [2024-10-01 15:58:53.333246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.493 [2024-10-01 15:58:53.333385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.493 [2024-10-01 15:58:53.344309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.493 [2024-10-01 15:58:53.344432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.344446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.344457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.344469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.344479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.344485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.344492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.344505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.356675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.356954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.356972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.356980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.357122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.357152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.357159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.357166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.357179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.366774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.366886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.366901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.366909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.366920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.366931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.366937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.366944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.366957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.379566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.379972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.379991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.379999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.380142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.380490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.380502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.380512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.380668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.390047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.390183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.390198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.390206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.390217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.390228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.390235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.390241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.390256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.401523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.401630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.401644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.401652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.401663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.401674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.401680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.401687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.401700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.414285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.414639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.414657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.414665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.414779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.414961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.414972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.414979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.415009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.425231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.425374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.425390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.425398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.425410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.425420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.425426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.425433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.425625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.436217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.436372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.436388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.436395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.436555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.436589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.436596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.436602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.436616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.447857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.448118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.448135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.448143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.448172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.494 [2024-10-01 15:58:53.448184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.494 [2024-10-01 15:58:53.448190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.494 [2024-10-01 15:58:53.448197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.494 [2024-10-01 15:58:53.448211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.494 [2024-10-01 15:58:53.459761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.494 [2024-10-01 15:58:53.459897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.494 [2024-10-01 15:58:53.459913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.494 [2024-10-01 15:58:53.459921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.494 [2024-10-01 15:58:53.459936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.495 [2024-10-01 15:58:53.459947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.495 [2024-10-01 15:58:53.459953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.495 [2024-10-01 15:58:53.459960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.495 [2024-10-01 15:58:53.459972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.495 [2024-10-01 15:58:53.471739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.495 [2024-10-01 15:58:53.471886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.495 [2024-10-01 15:58:53.471903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.495 [2024-10-01 15:58:53.471910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.495 [2024-10-01 15:58:53.472247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.495 [2024-10-01 15:58:53.472403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.495 [2024-10-01 15:58:53.472414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.495 [2024-10-01 15:58:53.472420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.495 [2024-10-01 15:58:53.472452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.495 [2024-10-01 15:58:53.482980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.495 [2024-10-01 15:58:53.483109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.495 [2024-10-01 15:58:53.483125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.495 [2024-10-01 15:58:53.483132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.495 [2024-10-01 15:58:53.483468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.495 [2024-10-01 15:58:53.483623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.495 [2024-10-01 15:58:53.483633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.495 [2024-10-01 15:58:53.483640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.495 [2024-10-01 15:58:53.483672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.495 [2024-10-01 15:58:53.493047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.495 [2024-10-01 15:58:53.493177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.495 [2024-10-01 15:58:53.493192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.495 [2024-10-01 15:58:53.493199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.495 [2024-10-01 15:58:53.493335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.495 [2024-10-01 15:58:53.493407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.495 [2024-10-01 15:58:53.493416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.495 [2024-10-01 15:58:53.493427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.495 [2024-10-01 15:58:53.493451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.495 [2024-10-01 15:58:53.503985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.504195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.504211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.504218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.504393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.504609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.504621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.504627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.504657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.514340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.514462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.514477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.514485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.514497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.514508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.514514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.514521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.514534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.526330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.526463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.526477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.526485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.526497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.526508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.526514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.526520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.526533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.536720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.536841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.536860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.536872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.536884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.536895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.536901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.536907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.536920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.548697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.548872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.548886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.548894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.548905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.548916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.548922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.548929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.548942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.559310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.559504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.559519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.559526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.559538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.559548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.559554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.559561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.495 [2024-10-01 15:58:53.559574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.495 [2024-10-01 15:58:53.569845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.495 [2024-10-01 15:58:53.570097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.495 [2024-10-01 15:58:53.570113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.495 [2024-10-01 15:58:53.570121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.495 [2024-10-01 15:58:53.570250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.495 [2024-10-01 15:58:53.570396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.495 [2024-10-01 15:58:53.570407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.495 [2024-10-01 15:58:53.570414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.570443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.580847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.581103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.581119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.581126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.581139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.581150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.581156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.581163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.581176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.592101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.592344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.592359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.592367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.592380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.592390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.592397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.592403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.592416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.603166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.603383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.603400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.603407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.603536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.603574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.603582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.603589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.603716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.613766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.613921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.613937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.613944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.613956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.613966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.613973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.613979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.613992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.626698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.627410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.627430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.627438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.627738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.627900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.627911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.627917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.627949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.637756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.638105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.638123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.638130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.638274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.638303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.638310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.638317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.638331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.648718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.649084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.649102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.649113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.649256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.649282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.649289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.649295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.649309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.660263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.660432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.660446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.660454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.660466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.660477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.660483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.660489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.496 [2024-10-01 15:58:53.660502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.496 [2024-10-01 15:58:53.671879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.496 [2024-10-01 15:58:53.672104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.496 [2024-10-01 15:58:53.672119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.496 [2024-10-01 15:58:53.672127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.496 [2024-10-01 15:58:53.672138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.496 [2024-10-01 15:58:53.672149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.496 [2024-10-01 15:58:53.672156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.496 [2024-10-01 15:58:53.672162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.672176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.683634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.683861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.683881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.683889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.683901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.683912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.683921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.683928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.683941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.695329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.695553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.695569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.695576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.695588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.695598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.695605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.695612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.695625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.706983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.707204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.707219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.707227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.707239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.707250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.707256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.707262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.707275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.719010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.719206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.719229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.719237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.719572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.719738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.719749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.719755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.719904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.730211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.730411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.730435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.730443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.730572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.730601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.730609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.730615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.730629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.741129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.741371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.741388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.741396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.741525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.741563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.741571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.741577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.741706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.751779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.497 [2024-10-01 15:58:53.752029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.497 [2024-10-01 15:58:53.752048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.497 [2024-10-01 15:58:53.752055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.497 [2024-10-01 15:58:53.752147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.497 [2024-10-01 15:58:53.752211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.497 [2024-10-01 15:58:53.752218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.497 [2024-10-01 15:58:53.752224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.497 [2024-10-01 15:58:53.752349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.497 [2024-10-01 15:58:53.762156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.497 [2024-10-01 15:58:53.762326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.497 [2024-10-01 15:58:53.762341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.497 [2024-10-01 15:58:53.762349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.497 [2024-10-01 15:58:53.762364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.497 [2024-10-01 15:58:53.762375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.497 [2024-10-01 15:58:53.762381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.497 [2024-10-01 15:58:53.762387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.497 [2024-10-01 15:58:53.762402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.497 [2024-10-01 15:58:53.772222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.497 [2024-10-01 15:58:53.772456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.497 [2024-10-01 15:58:53.772471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.497 [2024-10-01 15:58:53.772478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.497 [2024-10-01 15:58:53.772490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.497 [2024-10-01 15:58:53.772501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.497 [2024-10-01 15:58:53.772507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.497 [2024-10-01 15:58:53.772513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.497 [2024-10-01 15:58:53.772527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.497 [2024-10-01 15:58:53.782961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.497 [2024-10-01 15:58:53.783154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.497 [2024-10-01 15:58:53.783168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.497 [2024-10-01 15:58:53.783175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.497 [2024-10-01 15:58:53.783188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.497 [2024-10-01 15:58:53.783198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.497 [2024-10-01 15:58:53.783205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.497 [2024-10-01 15:58:53.783211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.497 [2024-10-01 15:58:53.783224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.497 [2024-10-01 15:58:53.794984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.497 [2024-10-01 15:58:53.795146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.795160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.795167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.795179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.795190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.795197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.795206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.795219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.805050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.805299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.805315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.805322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.805334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.805344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.805351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.805358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.805370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.815872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.816117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.816132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.816139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.816152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.816169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.816175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.816182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.816195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.826662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.826856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.826878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.826886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.827016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.827047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.827055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.827062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.827076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.837983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.838313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.838335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.838343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.838372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.838383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.838389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.838396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.838649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.851357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.852130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.852150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.852159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.852442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.852490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.852498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.852505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.852519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.862237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.862487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.862503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.862511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.862523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.862534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.862540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.862547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.862560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.874102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.874327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.874342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.874350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.874362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.874376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.874382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.874388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.874401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.885860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.886109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.886125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.886133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.886145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.886156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.886162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.886168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.886181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.897419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.897692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.897708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.897716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.898590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.498 [2024-10-01 15:58:53.899131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.498 [2024-10-01 15:58:53.899143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.498 [2024-10-01 15:58:53.899150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.498 [2024-10-01 15:58:53.899343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.498 [2024-10-01 15:58:53.909840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.498 [2024-10-01 15:58:53.910218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.498 [2024-10-01 15:58:53.910237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.498 [2024-10-01 15:58:53.910245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.498 [2024-10-01 15:58:53.910418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.910563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.910573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.910579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.910611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.920521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.920825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.920843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.920851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.920998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.921037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.921045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.921051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.921065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.931073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.931318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.931334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.931341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.931353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.931364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.931370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.931376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.931389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.943782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.943960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.943976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.943983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.943996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.944007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.944013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.944020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.944033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.954479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.954673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.954688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.954699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.954711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.954721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.954728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.954734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.954747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.966301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 11224.50 IOPS, 43.85 MiB/s [2024-10-01 15:58:53.967846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.967867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.967875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.968836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.969537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.969550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.969556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.969751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.977889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.978080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.978093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.978101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.978112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.978123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.978129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.978136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.979014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:53.990075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:53.990421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:53.990439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:53.990447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:53.990591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:53.990628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:53.990641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:53.990647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:53.990661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:54.001002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:54.001360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:54.001378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:54.001386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:54.001528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:54.001558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:54.001565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:54.001571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:54.001586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:54.013043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:54.013262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:54.013283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:54.013290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:54.013302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:54.013313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:54.013320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:54.013326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:54.013339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.499 [2024-10-01 15:58:54.024390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.499 [2024-10-01 15:58:54.024623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.499 [2024-10-01 15:58:54.024639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.499 [2024-10-01 15:58:54.024647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.499 [2024-10-01 15:58:54.024659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.499 [2024-10-01 15:58:54.024669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.499 [2024-10-01 15:58:54.024676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.499 [2024-10-01 15:58:54.024682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.499 [2024-10-01 15:58:54.024695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.500 [2024-10-01 15:58:54.036017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.500 [2024-10-01 15:58:54.036440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.500 [2024-10-01 15:58:54.036458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.500 [2024-10-01 15:58:54.036466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.500 [2024-10-01 15:58:54.036612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.500 [2024-10-01 15:58:54.036642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.500 [2024-10-01 15:58:54.036649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.500 [2024-10-01 15:58:54.036656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.500 [2024-10-01 15:58:54.036671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.500 [2024-10-01 15:58:54.048272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.048521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.048536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.048544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.048556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.048567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.048573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.048579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.048592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.060658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.061018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.061037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.061044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.061393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.061561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.061573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.061580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.061611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.071910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.072136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.072152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.072159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.072175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.072185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.072191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.072198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.072211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.084201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.084575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.084593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.084601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.084803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.084839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.084846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.084853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.084987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.095417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.095640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.095655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.095663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.096006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.096168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.096178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.096185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.096216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.106827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.107095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.107111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.107119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.107131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.107142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.107148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.107158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.107171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.117562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.117800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.117815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.117823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.117835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.117846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.117852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.117858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.117878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.129674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.129918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.129934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.129942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.129954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.129972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.129979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.500 [2024-10-01 15:58:54.129986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.500 [2024-10-01 15:58:54.129999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.500 [2024-10-01 15:58:54.141867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.500 [2024-10-01 15:58:54.142120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.500 [2024-10-01 15:58:54.142136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.500 [2024-10-01 15:58:54.142143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.500 [2024-10-01 15:58:54.142156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.500 [2024-10-01 15:58:54.142166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.500 [2024-10-01 15:58:54.142172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.142178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.142192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.153765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.154070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.154092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.154100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.154129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.154141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.154147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.154153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.154166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.164459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.164712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.164727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.164735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.164747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.164758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.164764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.164770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.164783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.177240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.177560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.177579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.177586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.177760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.177793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.177801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.177807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.177821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.188833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.189149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.189167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.189175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.189316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.189361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.189371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.189380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.189395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.199453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.199610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.199625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.199633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.199645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.199656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.199663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.199670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.199683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.209657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.211949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.211971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.211979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.212583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.212932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.212944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.212952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.213105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.223608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.223762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.223776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.223784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.223796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.223807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.223813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.223820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.223836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.234681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.234932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.234949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.234957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.234971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.234982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.234989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.234995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.235008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.246823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.247172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.247190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.247198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.247212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.247223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.247229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.247236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.247249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.256898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.501 [2024-10-01 15:58:54.257098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.501 [2024-10-01 15:58:54.257118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.501 [2024-10-01 15:58:54.257125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.501 [2024-10-01 15:58:54.257492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.501 [2024-10-01 15:58:54.257540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.501 [2024-10-01 15:58:54.257548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.501 [2024-10-01 15:58:54.257555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.501 [2024-10-01 15:58:54.257569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.501 [2024-10-01 15:58:54.266964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.502 [2024-10-01 15:58:54.267152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.502 [2024-10-01 15:58:54.267167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.502 [2024-10-01 15:58:54.267177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.502 [2024-10-01 15:58:54.268033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.502 [2024-10-01 15:58:54.268732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.502 [2024-10-01 15:58:54.268745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.502 [2024-10-01 15:58:54.268752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.502 [2024-10-01 15:58:54.269473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.502 [2024-10-01 15:58:54.277031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.502 [2024-10-01 15:58:54.277276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.502 [2024-10-01 15:58:54.277292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.502 [2024-10-01 15:58:54.277299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.502 [2024-10-01 15:58:54.278499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.502 [2024-10-01 15:58:54.278753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.502 [2024-10-01 15:58:54.278766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.502 [2024-10-01 15:58:54.278773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.502 [2024-10-01 15:58:54.278876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.502 [2024-10-01 15:58:54.287802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.502 [2024-10-01 15:58:54.287992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.502 [2024-10-01 15:58:54.288007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.502 [2024-10-01 15:58:54.288015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.502 [2024-10-01 15:58:54.288027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.502 [2024-10-01 15:58:54.288037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.502 [2024-10-01 15:58:54.288043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.502 [2024-10-01 15:58:54.288050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.502 [2024-10-01 15:58:54.288063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.502 [2024-10-01 15:58:54.298546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.502 [2024-10-01 15:58:54.298820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.502 [2024-10-01 15:58:54.298835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.502 [2024-10-01 15:58:54.298843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.502 [2024-10-01 15:58:54.298855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.502 [2024-10-01 15:58:54.298871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.502 [2024-10-01 15:58:54.298881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.502 [2024-10-01 15:58:54.298887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.502 [2024-10-01 15:58:54.298901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.502 [2024-10-01 15:58:54.308611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.308776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.308792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.308799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.308811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.308822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.308828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.308835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.308848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.318677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.319348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.319367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.319375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.320018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.320638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.320651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.320657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.320822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.328744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.328940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.328956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.328963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.328975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.328986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.328993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.328999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.329012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.341729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.342133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.342152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.342160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.342303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.342333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.342340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.342347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.342362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.352127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.352377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.352392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.352400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.352529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.352672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.352682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.352689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.352718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.363880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.364057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.364071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.364079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.364091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.364102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.364108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.364115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.364128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.376702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.376947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.502 [2024-10-01 15:58:54.376964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.502 [2024-10-01 15:58:54.376972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.502 [2024-10-01 15:58:54.376987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.502 [2024-10-01 15:58:54.376998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.502 [2024-10-01 15:58:54.377004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.502 [2024-10-01 15:58:54.377010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.502 [2024-10-01 15:58:54.377024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.502 [2024-10-01 15:58:54.388682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.502 [2024-10-01 15:58:54.389052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.389071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.389079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.389226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.389265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.389273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.389280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.389293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.400171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.400573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.400592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.400600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.400741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.400771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.400778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.400785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.400799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.410598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.410823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.410839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.410847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.410859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.410875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.410882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.410892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.410905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.423372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.423775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.423794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.423802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.423954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.423989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.423996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.424003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.424017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.434484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.434874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.434892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.434900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.435041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.435071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.435078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.435085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.435212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.445493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.445738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.445754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.445761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.445774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.445784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.445791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.445797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.445810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.458883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.459279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.459301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.459309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.459452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.459602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.459612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.459619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.459649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.469925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.470302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.470320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.470328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.470471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.470499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.470507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.470513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.470527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.481595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.481842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.481857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.481871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.481883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.481894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.481901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.481907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.481921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.494021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.494178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.494192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.494200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.494211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.494225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.494232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.494238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.494251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.505783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.506050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.506067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.503 [2024-10-01 15:58:54.506075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.503 [2024-10-01 15:58:54.506087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.503 [2024-10-01 15:58:54.506098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.503 [2024-10-01 15:58:54.506104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.503 [2024-10-01 15:58:54.506110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.503 [2024-10-01 15:58:54.506124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.503 [2024-10-01 15:58:54.517857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.503 [2024-10-01 15:58:54.518035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.503 [2024-10-01 15:58:54.518049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.518056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.518068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.518079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.518085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.518092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.518104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.530133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.530541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.530560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.530568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.530712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.530868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.530879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.530886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.530921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.540912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.541153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.541169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.541176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.541189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.541200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.541206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.541213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.541226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.552928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.553177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.553193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.553200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.553213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.553224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.553230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.553237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.553250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.564905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.565268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.565287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.565295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.565437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.565466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.565473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.565480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.565494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.576651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.576848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.576868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.576879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.576891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.576902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.576908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.576914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.576927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.588820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.589007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.589022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.589030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.589042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.589053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.589059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.589065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.589078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.601057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.601384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.601402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.601410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.601584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.601617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.601625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.601631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.601645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.611753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.612074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.612092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.612101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.612243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.612273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.612283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.612290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.612305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.623227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.623401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.623415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.623423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.623434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.623446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.623452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.623458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.623472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.635845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.504 [2024-10-01 15:58:54.636033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.504 [2024-10-01 15:58:54.636049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.504 [2024-10-01 15:58:54.636056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.504 [2024-10-01 15:58:54.636069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.504 [2024-10-01 15:58:54.636080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.504 [2024-10-01 15:58:54.636086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.504 [2024-10-01 15:58:54.636093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.504 [2024-10-01 15:58:54.636106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.504 [2024-10-01 15:58:54.648225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.504 [2024-10-01 15:58:54.648353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.504 [2024-10-01 15:58:54.648368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.648375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.648387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.648398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.648404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.648410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.648423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.660616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.660935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.660954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.660962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.661105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.661134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.661141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.661147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.661162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.671998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.672122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.672137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.672145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.672156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.672167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.672173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.672179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.672192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.682760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.682988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.683005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.683012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.683024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.683035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.683041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.683048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.683061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.694086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.694260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.694274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.694281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.694297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.694308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.694314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.694321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.694334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.705682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.706055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.706075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.706083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.706172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.706186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.706192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.706198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.706212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.717378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.717732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.717750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.717758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.717778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.717790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.717797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.717803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.717816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.728106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.728725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.728745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.728753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.728916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.728947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.728955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.728965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.728980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.739236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.739596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.739615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.739622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.739763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.739792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.739800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.739807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.505 [2024-10-01 15:58:54.739821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.505 [2024-10-01 15:58:54.750228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.505 [2024-10-01 15:58:54.750499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.505 [2024-10-01 15:58:54.750516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.505 [2024-10-01 15:58:54.750524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.505 [2024-10-01 15:58:54.750553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.505 [2024-10-01 15:58:54.750565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.505 [2024-10-01 15:58:54.750572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.505 [2024-10-01 15:58:54.750578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.750592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.761068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.761247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.761262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.761269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.761281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.761292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.761299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.761305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.761318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.772325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.772472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.772493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.772500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.772512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.772523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.772529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.772535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.772549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.784003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.784257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.784273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.784280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.784292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.784303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.784309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.784316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.784328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.797351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.797706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.797724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.797732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.797879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.798022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.798031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.798038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.798068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.808198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.808342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.808357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.808365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.808376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.808391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.808397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.808403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.808417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.819462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.819718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.819735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.819743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.819756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.819766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.819773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.819779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.819792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.829688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.829799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.829814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.829822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.829834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.829846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.829852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.829859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.829879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.841800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.841983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.841999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.842007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.842019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.842030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.842036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.842043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.842060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.852155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.852280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.852295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.852303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.852315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.852326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.852332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.852338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.852352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.863770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.863924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.863939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.863947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.863960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.863970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.863977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.863983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.863996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.874693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.506 [2024-10-01 15:58:54.874961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.506 [2024-10-01 15:58:54.874978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.506 [2024-10-01 15:58:54.874986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.506 [2024-10-01 15:58:54.874998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.506 [2024-10-01 15:58:54.875009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.506 [2024-10-01 15:58:54.875015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.506 [2024-10-01 15:58:54.875022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.506 [2024-10-01 15:58:54.875035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.506 [2024-10-01 15:58:54.886676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.507 [2024-10-01 15:58:54.887016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.507 [2024-10-01 15:58:54.887034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.507 [2024-10-01 15:58:54.887046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.507 [2024-10-01 15:58:54.887327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.507 [2024-10-01 15:58:54.887360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.507 [2024-10-01 15:58:54.887368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.507 [2024-10-01 15:58:54.887375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.507 [2024-10-01 15:58:54.887389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.507 [2024-10-01 15:58:54.898713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.507 [2024-10-01 15:58:54.899108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.507 [2024-10-01 15:58:54.899127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.507 [2024-10-01 15:58:54.899135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.507 [2024-10-01 15:58:54.899287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.507 [2024-10-01 15:58:54.899316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.507 [2024-10-01 15:58:54.899323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.507 [2024-10-01 15:58:54.899330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.507 [2024-10-01 15:58:54.899344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.507 [2024-10-01 15:58:54.909005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.909179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.909193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.909201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.909213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.909224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.909230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.909237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.909249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.920252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.920450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.920472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.920479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.920609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.920638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.920649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.920656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.920669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.930982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.931110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.931124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.931131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.931143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.931154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.931160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.931166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.931180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.941887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.941981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.941996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.942003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.942015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.942026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.942032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.942039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.942052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.953013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.953159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.953175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.953183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.953318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.953348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.953356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.953362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.953376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.964141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.964264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.964279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.964287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.964299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.964309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.964315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.964322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.964335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 11244.33 IOPS, 43.92 MiB/s [2024-10-01 15:58:54.975839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.976039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.976055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.976064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.976242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.976345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.976356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.976363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.976386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.986379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.986652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.986669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.986677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.986698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.986709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.986716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.986722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.986737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:54.996447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:54.996687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.507 [2024-10-01 15:58:54.996703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.507 [2024-10-01 15:58:54.996711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.507 [2024-10-01 15:58:54.996845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.507 [2024-10-01 15:58:54.996881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.507 [2024-10-01 15:58:54.996889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.507 [2024-10-01 15:58:54.996895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.507 [2024-10-01 15:58:54.996909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.507 [2024-10-01 15:58:55.006588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.507 [2024-10-01 15:58:55.006831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.006847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.006854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.006872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.006883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.006890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.006896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.006909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.018071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.018402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.018420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.018428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.018570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.018599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.018607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.018614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.018627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.028730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.028906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.028922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.028930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.028942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.028953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.028959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.028970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.028984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.040055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.040255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.040271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.040279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.040291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.040302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.040309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.040315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.040328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.051024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.051195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.051210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.051217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.051229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.051239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.051246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.051253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.051266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.061528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.061770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.061785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.061792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.061805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.061816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.061822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.061828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.061841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.073391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.073643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.073657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.073665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.073677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.073688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.073694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.073701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.073714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.085086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.085279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.085294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.085302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.085313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.085324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.085330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.085336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.085349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.096037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.096216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.096232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.096240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.096575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.096733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.096743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.096750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.096781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.106896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.107082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.107097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.107104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.107234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.107268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.107276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.107282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.107296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.118039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.118156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.118170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.508 [2024-10-01 15:58:55.118178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.508 [2024-10-01 15:58:55.118623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.508 [2024-10-01 15:58:55.118793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.508 [2024-10-01 15:58:55.118804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.508 [2024-10-01 15:58:55.118811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.508 [2024-10-01 15:58:55.118841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.508 [2024-10-01 15:58:55.129490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.508 [2024-10-01 15:58:55.129654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.508 [2024-10-01 15:58:55.129668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.509 [2024-10-01 15:58:55.129675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.509 [2024-10-01 15:58:55.129687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.509 [2024-10-01 15:58:55.129698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.509 [2024-10-01 15:58:55.129705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.509 [2024-10-01 15:58:55.129711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.509 [2024-10-01 15:58:55.129724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.509 [2024-10-01 15:58:55.140713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.509 [2024-10-01 15:58:55.140943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.509 [2024-10-01 15:58:55.140960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.509 [2024-10-01 15:58:55.140967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.509 [2024-10-01 15:58:55.140979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.509 [2024-10-01 15:58:55.140990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.509 [2024-10-01 15:58:55.140997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.509 [2024-10-01 15:58:55.141003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.509 [2024-10-01 15:58:55.141021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.509 [2024-10-01 15:58:55.151962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.509 [2024-10-01 15:58:55.152512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.509 [2024-10-01 15:58:55.152532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.509 [2024-10-01 15:58:55.152540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.509 [2024-10-01 15:58:55.152800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.509 [2024-10-01 15:58:55.152959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.509 [2024-10-01 15:58:55.152970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.509 [2024-10-01 15:58:55.152977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.509 [2024-10-01 15:58:55.153008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.509 [2024-10-01 15:58:55.164291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.509 [2024-10-01 15:58:55.164513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.509 [2024-10-01 15:58:55.164528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.509 [2024-10-01 15:58:55.164537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.509 [2024-10-01 15:58:55.164548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.509 [2024-10-01 15:58:55.164559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.509 [2024-10-01 15:58:55.164565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.509 [2024-10-01 15:58:55.164572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.509 [2024-10-01 15:58:55.164585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.509 [2024-10-01 15:58:55.175107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.509 [2024-10-01 15:58:55.175265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.509 [2024-10-01 15:58:55.175279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.509 [2024-10-01 15:58:55.175287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.509 [2024-10-01 15:58:55.175298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.509 [2024-10-01 15:58:55.175309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.509 [2024-10-01 15:58:55.175315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.509 [2024-10-01 15:58:55.175322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.509 [2024-10-01 15:58:55.175335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.509 [2024-10-01 15:58:55.186476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.186738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.509 [2024-10-01 15:58:55.186754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.509 [2024-10-01 15:58:55.186765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.509 [2024-10-01 15:58:55.187078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.509 [2024-10-01 15:58:55.187233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.509 [2024-10-01 15:58:55.187244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.509 [2024-10-01 15:58:55.187251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.509 [2024-10-01 15:58:55.187394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.509 [2024-10-01 15:58:55.198304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.198554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.509 [2024-10-01 15:58:55.198570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.509 [2024-10-01 15:58:55.198578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.509 [2024-10-01 15:58:55.198590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.509 [2024-10-01 15:58:55.198608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.509 [2024-10-01 15:58:55.198615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.509 [2024-10-01 15:58:55.198621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.509 [2024-10-01 15:58:55.198634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.509 [2024-10-01 15:58:55.209949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.210204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.509 [2024-10-01 15:58:55.210219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.509 [2024-10-01 15:58:55.210227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.509 [2024-10-01 15:58:55.210239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.509 [2024-10-01 15:58:55.210249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.509 [2024-10-01 15:58:55.210256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.509 [2024-10-01 15:58:55.210262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.509 [2024-10-01 15:58:55.210275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.509 [2024-10-01 15:58:55.221398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.221643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.509 [2024-10-01 15:58:55.221660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.509 [2024-10-01 15:58:55.221667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.509 [2024-10-01 15:58:55.221679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.509 [2024-10-01 15:58:55.221690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.509 [2024-10-01 15:58:55.221700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.509 [2024-10-01 15:58:55.221706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.509 [2024-10-01 15:58:55.221720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.509 [2024-10-01 15:58:55.232668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.232982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.509 [2024-10-01 15:58:55.233000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.509 [2024-10-01 15:58:55.233008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.509 [2024-10-01 15:58:55.233151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.509 [2024-10-01 15:58:55.233180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.509 [2024-10-01 15:58:55.233187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.509 [2024-10-01 15:58:55.233194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.509 [2024-10-01 15:58:55.233208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.509 [2024-10-01 15:58:55.243266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.509 [2024-10-01 15:58:55.243442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.243457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.243465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.243476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.243487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.243493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.243499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.243512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.254771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.255021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.255039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.255047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.255059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.255070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.255077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.255083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.255097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.266007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.266120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.266134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.266142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.266153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.266164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.266170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.266177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.266190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.276073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.276317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.276332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.276340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.276352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.276363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.276370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.276376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.276389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.286882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.287103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.287118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.287126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.287310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.287462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.287472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.287479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.287509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.298379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.298577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.298594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.298601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.298617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.298629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.298635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.298642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.298656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.309012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.309259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.309274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.309282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.309971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.310441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.310454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.310461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.310625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.321318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.321642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.321660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.321668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.321811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.321840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.321847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.321854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.321874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.331705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.331903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.331919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.331927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.332056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.332086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.332093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.332114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.332127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.343059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.343254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.343268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.343276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.343287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.343298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.343305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.343311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.343325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.354331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.354452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.354466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.510 [2024-10-01 15:58:55.354474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.510 [2024-10-01 15:58:55.354486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.510 [2024-10-01 15:58:55.354496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.510 [2024-10-01 15:58:55.354503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.510 [2024-10-01 15:58:55.354509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.510 [2024-10-01 15:58:55.354522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.510 [2024-10-01 15:58:55.366947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.510 [2024-10-01 15:58:55.367323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-01 15:58:55.367342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.367350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.367495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.367525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.367533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.367540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.367554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.377877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.378124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.378142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.378150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.378292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.378322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.378329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.378336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.378350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.389206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.389317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.389331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.389339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.389350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.389360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.389367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.389373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.389386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.400295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.400562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.400578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.400585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.400597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.400608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.400614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.400621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.400634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.411443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.411825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.411843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.411851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.412182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.412339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.412350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.412358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.412500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.424219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.424406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.424420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.424427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.424439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.424451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.424457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.424464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.424477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.435093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.511 [2024-10-01 15:58:55.435336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.511 [2024-10-01 15:58:55.435351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.511 [2024-10-01 15:58:55.435359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.511 [2024-10-01 15:58:55.435371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.511 [2024-10-01 15:58:55.435382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.511 [2024-10-01 15:58:55.435389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.511 [2024-10-01 15:58:55.435395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.511 [2024-10-01 15:58:55.435408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.511 [2024-10-01 15:58:55.445690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.511 [2024-10-01 15:58:55.445962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.511 [2024-10-01 15:58:55.445978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.511 [2024-10-01 15:58:55.445986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.511 [2024-10-01 15:58:55.445998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.511 [2024-10-01 15:58:55.446009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.511 [2024-10-01 15:58:55.446016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.511 [2024-10-01 15:58:55.446022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.511 [2024-10-01 15:58:55.446039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.511 [2024-10-01 15:58:55.456819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.511 [2024-10-01 15:58:55.457011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.511 [2024-10-01 15:58:55.457027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.511 [2024-10-01 15:58:55.457035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.511 [2024-10-01 15:58:55.457371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.511 [2024-10-01 15:58:55.457528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.511 [2024-10-01 15:58:55.457539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.511 [2024-10-01 15:58:55.457545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.511 [2024-10-01 15:58:55.457689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.511 [2024-10-01 15:58:55.468175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.511 [2024-10-01 15:58:55.468534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.511 [2024-10-01 15:58:55.468553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.511 [2024-10-01 15:58:55.468561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.511 [2024-10-01 15:58:55.468705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.511 [2024-10-01 15:58:55.468735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.511 [2024-10-01 15:58:55.468742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.511 [2024-10-01 15:58:55.468749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.511 [2024-10-01 15:58:55.468763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.511 [2024-10-01 15:58:55.478928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.511 [2024-10-01 15:58:55.479114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.511 [2024-10-01 15:58:55.479129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.511 [2024-10-01 15:58:55.479137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.511 [2024-10-01 15:58:55.479266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.511 [2024-10-01 15:58:55.479297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.511 [2024-10-01 15:58:55.479305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.511 [2024-10-01 15:58:55.479311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.511 [2024-10-01 15:58:55.479325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.511 [2024-10-01 15:58:55.490877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.511 [2024-10-01 15:58:55.491102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.511 [2024-10-01 15:58:55.491121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.511 [2024-10-01 15:58:55.491128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.491140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.491151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.491157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.491164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.491176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.503231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.503410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.503425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.503432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.503444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.503455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.503462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.503469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.503482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.514551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.514877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.514895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.514903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.515248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.515405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.515416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.515422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.515453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.527370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.527544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.527559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.527566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.527578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.527592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.527598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.527605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.527618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.538360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.538583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.538599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.538606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.538618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.538628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.538635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.538641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.538654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.549720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.549974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.549992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.550000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.550140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.550169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.550176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.550183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.550197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.560831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.561237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.561256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.561264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.561408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.561439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.561447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.561453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.561467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.572032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.572333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.572350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.572358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.572386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.572398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.572404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.572410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.572540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.584384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.584554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.584568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.584575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.584594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.584605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.584611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.584618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.584631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.595938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.596160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.596175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.596183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.596194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.596205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.596211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.596217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.596230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.606835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.607080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.607095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.607106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.607118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.512 [2024-10-01 15:58:55.607128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.512 [2024-10-01 15:58:55.607135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.512 [2024-10-01 15:58:55.607141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.512 [2024-10-01 15:58:55.607154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.512 [2024-10-01 15:58:55.619061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.512 [2024-10-01 15:58:55.619420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.512 [2024-10-01 15:58:55.619438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.512 [2024-10-01 15:58:55.619446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.512 [2024-10-01 15:58:55.620006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.620217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.620228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.620235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.620382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.629128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.629350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.629365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.629372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.629385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.629396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.629402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.629409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.629422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.639645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.639893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.639911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.639919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.640049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.640088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.640096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.640106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.640120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.649860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.650108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.650124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.650131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.650527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.650577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.650585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.650591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.650605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.661166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.661727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.661747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.661754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.662029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.662071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.662079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.662085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.662099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.672421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.672792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.672810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.672818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.672974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.673005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.673012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.673019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.673033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.683182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.683381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.683395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.683403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.683415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.683426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.683432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.683438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.683605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.694160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.694398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.694414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.694422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.694434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.694445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.694451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.694458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.694470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.704225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.704447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.704462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.704469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.704481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.704492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.704498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.704505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.704518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.714289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.714536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.714552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.513 [2024-10-01 15:58:55.714559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.513 [2024-10-01 15:58:55.714575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.513 [2024-10-01 15:58:55.714585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.513 [2024-10-01 15:58:55.714591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.513 [2024-10-01 15:58:55.714597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.513 [2024-10-01 15:58:55.714610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.513 [2024-10-01 15:58:55.724354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.513 [2024-10-01 15:58:55.724577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.513 [2024-10-01 15:58:55.724592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.514 [2024-10-01 15:58:55.724599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.514 [2024-10-01 15:58:55.724612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.514 [2024-10-01 15:58:55.724623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.514 [2024-10-01 15:58:55.724629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.514 [2024-10-01 15:58:55.724635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.514 [2024-10-01 15:58:55.724648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.514 [2024-10-01 15:58:55.734418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.514 [2024-10-01 15:58:55.734662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.514 [2024-10-01 15:58:55.734678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.514 [2024-10-01 15:58:55.734685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.514 [2024-10-01 15:58:55.734814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.514 [2024-10-01 15:58:55.734963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.514 [2024-10-01 15:58:55.734974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.514 [2024-10-01 15:58:55.734981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.514 [2024-10-01 15:58:55.735012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.514 [2024-10-01 15:58:55.744759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.514 [2024-10-01 15:58:55.744953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.514 [2024-10-01 15:58:55.744968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.514 [2024-10-01 15:58:55.744976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.514 [2024-10-01 15:58:55.744988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.514 [2024-10-01 15:58:55.745000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.514 [2024-10-01 15:58:55.745006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.514 [2024-10-01 15:58:55.745012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.514 [2024-10-01 15:58:55.745029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.514 [2024-10-01 15:58:55.756314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.514 [2024-10-01 15:58:55.756508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.514 [2024-10-01 15:58:55.756523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.514 [2024-10-01 15:58:55.756531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.514 [2024-10-01 15:58:55.756543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.514 [2024-10-01 15:58:55.756553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.514 [2024-10-01 15:58:55.756560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.514 [2024-10-01 15:58:55.756566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.514 [2024-10-01 15:58:55.756579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.514 [2024-10-01 15:58:55.768373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.768803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.768823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.768831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.768980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.769017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.769025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.769032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.769045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.778759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.779005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.779021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.779029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.779475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.779646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.779657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.779664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.779838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.790197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.790420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.790439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.790447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.790459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.790470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.790476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.790482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.790495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.801777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.801946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.801961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.801968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.801980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.801991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.801997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.802004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.802017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.813135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.813437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.813455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.813463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.813491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.813502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.813508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.813515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.813529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.824035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.824214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.824229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.824236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.824248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.824263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.824270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.824276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.824289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.834792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.835012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.514 [2024-10-01 15:58:55.835029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.514 [2024-10-01 15:58:55.835037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.514 [2024-10-01 15:58:55.835049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.514 [2024-10-01 15:58:55.835060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.514 [2024-10-01 15:58:55.835066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.514 [2024-10-01 15:58:55.835073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.514 [2024-10-01 15:58:55.835086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.514 [2024-10-01 15:58:55.844859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.514 [2024-10-01 15:58:55.845023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.845038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.845046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.845058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.845068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.845075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.845081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.845094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.856390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.856603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.856619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.856627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.856639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.856649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.856655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.856662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.856675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.867607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.867973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.867992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.868000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.868175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.868208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.868216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.868222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.868236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.878112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.878333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.878348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.878356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.878367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.878378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.878385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.878391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.878404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.890037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.890388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.890406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.890414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.890565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.890596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.890602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.890609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.890738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.902701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.902948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.902964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.902979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.902991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.903002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.903008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.903014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.903028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.913632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.913877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.913893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.913901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.913913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.913924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.913930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.913936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.913950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.925836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.926089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.926105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.926113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.926125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.926136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.926142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.926148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.926162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.937059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.937231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.937246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.937253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.937265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.937276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.937282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.937292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.937306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.948401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.948642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.948657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.948664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.948849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.948998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.949009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.949016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.949055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.958825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.959188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.959206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.515 [2024-10-01 15:58:55.959213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.515 [2024-10-01 15:58:55.959354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.515 [2024-10-01 15:58:55.959392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.515 [2024-10-01 15:58:55.959400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.515 [2024-10-01 15:58:55.959407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.515 [2024-10-01 15:58:55.959421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.515 [2024-10-01 15:58:55.969748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.515 [2024-10-01 15:58:55.969974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.515 [2024-10-01 15:58:55.969990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:55.969999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:55.970011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:55.970021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:55.970027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:55.970034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:55.970047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 11329.75 IOPS, 44.26 MiB/s [2024-10-01 15:58:55.981757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:55.981881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:55.981896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:55.981904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:55.981916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:55.981927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:55.981933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:55.981939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:55.981952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:55.993479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:55.993726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:55.993743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:55.993750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:55.993762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:55.993773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:55.993779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:55.993786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:55.993799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.005764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.006192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.006211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.006219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.006363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.006505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.006516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.006523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.006551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.016968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.017283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.017302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.017310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.017459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.017486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.017494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.017500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.017515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.028035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.028451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.028470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.028478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.028622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.028653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.028660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.028667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.028682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.039668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.040005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.040023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.040031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.040176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.040207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.040214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.040221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.040236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.050459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.050836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.050854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.050868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.051011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.051037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.051044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.051055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.051069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.062148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.062378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.062392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.062400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.062412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.062423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.062430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.062437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.062450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.074316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.074497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.074512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.074519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.074531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.074542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.074548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.074554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.074567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.085930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.086108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.086122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.086130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.516 [2024-10-01 15:58:56.086141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.516 [2024-10-01 15:58:56.086152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.516 [2024-10-01 15:58:56.086159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.516 [2024-10-01 15:58:56.086166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.516 [2024-10-01 15:58:56.086179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.516 [2024-10-01 15:58:56.096679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.516 [2024-10-01 15:58:56.096839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.516 [2024-10-01 15:58:56.096858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.516 [2024-10-01 15:58:56.096870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.096882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.096893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.096899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.096906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.096919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.108841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.109172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.109191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.109199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.109227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.109239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.109245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.109252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.109265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.120002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.120321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.120339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.120347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.120375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.120387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.120393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.120400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.120561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.131246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.131593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.131611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.131620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.131765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.131796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.131803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.131809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.131823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.142332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.142644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.142662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.142670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.142927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.142960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.142968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.142975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.142988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.152424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.152605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.152619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.152626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.152638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.152649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.152655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.152662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.152675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.162854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.163036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.163050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.163057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.163069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.163081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.163087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.163093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.163110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.173609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.173971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.173990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.173998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.174028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.174039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.174045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.174052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.174065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.184114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.184244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.184259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.184266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.184395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.184425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.184432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.184438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.184453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.194601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.194852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.194873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.194881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.517 [2024-10-01 15:58:56.194893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.517 [2024-10-01 15:58:56.194904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.517 [2024-10-01 15:58:56.194910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.517 [2024-10-01 15:58:56.194917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.517 [2024-10-01 15:58:56.194929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.517 [2024-10-01 15:58:56.205954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.517 [2024-10-01 15:58:56.206129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.517 [2024-10-01 15:58:56.206143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.517 [2024-10-01 15:58:56.206154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.206166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.206177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.206183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.206189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.206202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.218044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.218249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.218264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.218272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.218285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.218296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.218302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.218309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.218324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.229003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.229167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.229181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.229189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.229201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.229212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.229219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.229225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.229238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.240628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.240753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.240767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.240776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.240788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.240799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.240810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.240817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.240831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.251887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.252063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.252079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.252087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.252100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.252111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.252117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.252124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.252138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.263425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.263552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.263566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.263574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.263585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.263596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.263602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.263608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.263621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.274289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.274465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.274479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.274487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.274498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.274510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.274516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.274523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.274536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.285780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.286039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.286055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.286063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.286075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.286086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.286092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.286099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.286112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.296496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.296668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.296682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.296689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.296701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.296712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.296718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.296724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.296738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.307347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.307624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.307641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.307649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.307661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.307672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.307678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.307684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.307774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.319059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.319180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.319194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.319201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.518 [2024-10-01 15:58:56.319217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.518 [2024-10-01 15:58:56.319228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.518 [2024-10-01 15:58:56.319235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.518 [2024-10-01 15:58:56.319241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.518 [2024-10-01 15:58:56.319255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.518 [2024-10-01 15:58:56.329732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.518 [2024-10-01 15:58:56.329900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.518 [2024-10-01 15:58:56.329915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.518 [2024-10-01 15:58:56.329922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.329934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.329945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.329952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.329958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.329971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.342077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.342284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.342299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.342307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.342935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.343176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.343188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.343196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.343786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.353061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.353240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.353255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.353263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.353390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.353497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.353516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.353527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.353554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.363974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.364201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.364218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.364227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.364239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.364250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.364256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.364263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.364403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.375473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.375744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.375762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.375770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.375938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.376084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.376095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.376103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.376147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.386348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.386494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.386510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.386518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.386647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.386688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.386696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.386702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.386717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.397338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.397588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.397608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.397616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.397629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.397640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.397646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.397653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.397666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.410054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.410263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.410279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.410286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.410299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.410310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.410316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.410323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.410337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.421999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.422120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.422133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.422141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.422152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.422174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.422182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.422188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.422440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.433502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.433705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.433721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.433729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.433742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.433757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.433764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.433771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.433784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.444988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.445186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.445200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.445207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.445219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.519 [2024-10-01 15:58:56.445230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.519 [2024-10-01 15:58:56.445237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.519 [2024-10-01 15:58:56.445243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.519 [2024-10-01 15:58:56.445256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.519 [2024-10-01 15:58:56.455952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.519 [2024-10-01 15:58:56.456173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.519 [2024-10-01 15:58:56.456189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.519 [2024-10-01 15:58:56.456196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.519 [2024-10-01 15:58:56.456208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.456219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.456225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.456232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.456245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.467899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.468353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.468372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.468380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.468411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.468422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.468429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.468435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.468453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.478853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.479053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.479067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.479075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.479087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.479097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.479104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.479110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.479123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.489562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.489774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.489790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.489798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.489932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.489963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.489970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.489977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.489990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.500111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.500228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.500242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.500250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.500261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.500272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.500278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.500285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.500297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.510340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.510595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.510612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.510622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.510702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.512804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.512821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.512828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.513437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.521169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.524227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.524248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.524256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.524542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.525134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.525148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.525155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.525439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.531902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.532194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.532210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.532219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.532298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.534200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.534218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.534225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.534427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.543539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.543849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.543871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.543880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.546757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.547227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.547245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.547253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.547418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.558068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.558355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.558372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.558381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.558523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.558549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.558556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.558562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.558577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.568661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.568887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.568904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.568912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.568924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.568936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.568943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.520 [2024-10-01 15:58:56.568949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.520 [2024-10-01 15:58:56.568962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.520 [2024-10-01 15:58:56.581260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.520 [2024-10-01 15:58:56.581459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.520 [2024-10-01 15:58:56.581475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.520 [2024-10-01 15:58:56.581484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.520 [2024-10-01 15:58:56.581496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.520 [2024-10-01 15:58:56.581507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.520 [2024-10-01 15:58:56.581513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.521 [2024-10-01 15:58:56.581520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.521 [2024-10-01 15:58:56.581533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.521 [2024-10-01 15:58:56.594076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.521 [2024-10-01 15:58:56.594404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.521 [2024-10-01 15:58:56.594422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.521 [2024-10-01 15:58:56.594430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.521 [2024-10-01 15:58:56.594572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.521 [2024-10-01 15:58:56.594724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.521 [2024-10-01 15:58:56.594735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.521 [2024-10-01 15:58:56.594742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.521 [2024-10-01 15:58:56.594772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.521 [2024-10-01 15:58:56.600036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 
[2024-10-01 15:58:56.600308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.521 [2024-10-01 15:58:56.600513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.521 [2024-10-01 15:58:56.600520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 
[2024-10-01 15:58:56.600558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 
[2024-10-01 15:58:56.600807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.522 [2024-10-01 15:58:56.600841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.600987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.600993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 
[2024-10-01 15:58:56.601059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.522 [2024-10-01 15:58:56.601154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.522 [2024-10-01 15:58:56.601162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 
[2024-10-01 15:58:56.601309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.523 [2024-10-01 15:58:56.601435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 
[2024-10-01 15:58:56.601566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.523 [2024-10-01 15:58:56.601802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.523 [2024-10-01 15:58:56.601808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 
15:58:56.601816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.524 [2024-10-01 15:58:56.601915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.524 [2024-10-01 15:58:56.601942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27448 len:8 PRP1 0x0 PRP2 0x0 00:24:57.524 [2024-10-01 15:58:56.601949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.601958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.524 [2024-10-01 15:58:56.601964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.524 [2024-10-01 15:58:56.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:24:57.524 [2024-10-01 15:58:56.601976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.524 [2024-10-01 15:58:56.602015] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x993460 was disconnected and freed. reset controller. 
00:24:57.524 [2024-10-01 15:58:56.602879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.602920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.603057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.603070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.524 [2024-10-01 15:58:56.603078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.603089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.603099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.603106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.603112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.603126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.524 [2024-10-01 15:58:56.604152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.604324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.604338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.524 [2024-10-01 15:58:56.604345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.604359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.604370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.604376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.604383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.604395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.524 [2024-10-01 15:58:56.614387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.614417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.614655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.614668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.524 [2024-10-01 15:58:56.614676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.614841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.614850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.524 [2024-10-01 15:58:56.614857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.614870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.614883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.614890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.614896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.614903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.524 [2024-10-01 15:58:56.614917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.524 [2024-10-01 15:58:56.614924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.614929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.614936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.614948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.524 [2024-10-01 15:58:56.624453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.624723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.624739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.524 [2024-10-01 15:58:56.624746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.624765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.624778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.624791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.624801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.624807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.624819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.524 [2024-10-01 15:58:56.624982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.624993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.524 [2024-10-01 15:58:56.625000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.625011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.625021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.625027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.625033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.625045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.524 [2024-10-01 15:58:56.634518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.634763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.634778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.524 [2024-10-01 15:58:56.634786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.634797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.634808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.634814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.634820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.634833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.524 [2024-10-01 15:58:56.634853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.635016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.524 [2024-10-01 15:58:56.635028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.524 [2024-10-01 15:58:56.635034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.524 [2024-10-01 15:58:56.635751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.524 [2024-10-01 15:58:56.635897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.524 [2024-10-01 15:58:56.635907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.524 [2024-10-01 15:58:56.635913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.524 [2024-10-01 15:58:56.635928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.524 [2024-10-01 15:58:56.645236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.524 [2024-10-01 15:58:56.645282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.645520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.645533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.525 [2024-10-01 15:58:56.645540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.646032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.646049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.525 [2024-10-01 15:58:56.646056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.646066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.646221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.646231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.646238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.646244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.525 [2024-10-01 15:58:56.646274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.646281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.646287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.646293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.646306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.655300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.655547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.655562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.525 [2024-10-01 15:58:56.655569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.655589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.655602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.655615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.655622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.655629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.655640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.655790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.655800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.525 [2024-10-01 15:58:56.655807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.655819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.655833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.655839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.655844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.655856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.525 [2024-10-01 15:58:56.667748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.667770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.668086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.668103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.525 [2024-10-01 15:58:56.668111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.668258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.668267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.525 [2024-10-01 15:58:56.668274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.668416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.668429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.668576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.668586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.668593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.525 [2024-10-01 15:58:56.668602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.668609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.668615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.668644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.668652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.678954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.678975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.679184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.679196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.525 [2024-10-01 15:58:56.679204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.679395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.679405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.525 [2024-10-01 15:58:56.679412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.679612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.679625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.679718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.679726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.679733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.679742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.679748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.679754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.679775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.679782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.525 [2024-10-01 15:58:56.689851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.689878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.525 [2024-10-01 15:58:56.690133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.690147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.525 [2024-10-01 15:58:56.690155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.690375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.525 [2024-10-01 15:58:56.690386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.525 [2024-10-01 15:58:56.690393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.525 [2024-10-01 15:58:56.690405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.690414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.525 [2024-10-01 15:58:56.690433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.690440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.690446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.525 [2024-10-01 15:58:56.690455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.525 [2024-10-01 15:58:56.690462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.525 [2024-10-01 15:58:56.690468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.525 [2024-10-01 15:58:56.690482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.525 [2024-10-01 15:58:56.690488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.700116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.700137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.700296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.700312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.700319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.700532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.700542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.700548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.700560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.700569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.700579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.700585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.700591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.700600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.700606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.700612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.700625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.700632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.526 [2024-10-01 15:58:56.711441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.711461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.711678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.711690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.711697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.711899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.711921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.711928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.712895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.712911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.713146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.713156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.713162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.526 [2024-10-01 15:58:56.713172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.713178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.713187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.713339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.713348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.723008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.723028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.723241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.723254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.723261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.723398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.723409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.723415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.723427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.723436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.723446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.723453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.723459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.723468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.723474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.723480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.723493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.723500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.526 [2024-10-01 15:58:56.735246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.735267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.735455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.735467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.735475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.735696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.735707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.735713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.735725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.735748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.735765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.735772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.735779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.526 [2024-10-01 15:58:56.735787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.735793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.735799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.735812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.735819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.748026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.748047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.748214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.748226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.748234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.748385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.748395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.748402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.748413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.748422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.526 [2024-10-01 15:58:56.748432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.748438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.748445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.748453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.526 [2024-10-01 15:58:56.748459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.526 [2024-10-01 15:58:56.748465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.526 [2024-10-01 15:58:56.748479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.526 [2024-10-01 15:58:56.748485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.526 [2024-10-01 15:58:56.760110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.760130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.526 [2024-10-01 15:58:56.760392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.760405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.526 [2024-10-01 15:58:56.760416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.760557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.526 [2024-10-01 15:58:56.760566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.526 [2024-10-01 15:58:56.760573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.526 [2024-10-01 15:58:56.761034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.761048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.761216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.761226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.761233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.527 [2024-10-01 15:58:56.761243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.761249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.761255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.761286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.761293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.770932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.770952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.771116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.771129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.527 [2024-10-01 15:58:56.771136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.771354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.771364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.527 [2024-10-01 15:58:56.771371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.771382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.771391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.771401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.771407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.771414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.771422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.771428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.771434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.771450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.771457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.527 [2024-10-01 15:58:56.783570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.783592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.783827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.783840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.527 [2024-10-01 15:58:56.783848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.784081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.784092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.527 [2024-10-01 15:58:56.784098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.784111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.784120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.784130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.784136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.784142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.527 [2024-10-01 15:58:56.784150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.784156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.784162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.784176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.784183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.795501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.795522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.795772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.795786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.527 [2024-10-01 15:58:56.795794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.795988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.796000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.527 [2024-10-01 15:58:56.796007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.796199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.796212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.796302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.796311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.796317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.796326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.796332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.796338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.796351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.796358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.527 [2024-10-01 15:58:56.806078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.806100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.806264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.806277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.527 [2024-10-01 15:58:56.806285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.806478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.806488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.527 [2024-10-01 15:58:56.806495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.806506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.806516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.806526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.806532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.806538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.527 [2024-10-01 15:58:56.806547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.806552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.806559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.806572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.806579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.816335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.816356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.527 [2024-10-01 15:58:56.816578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.816591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.527 [2024-10-01 15:58:56.816598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.816820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.527 [2024-10-01 15:58:56.816831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.527 [2024-10-01 15:58:56.816838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.527 [2024-10-01 15:58:56.816849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.816858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.527 [2024-10-01 15:58:56.816874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.816880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.816886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.816895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.527 [2024-10-01 15:58:56.816900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.527 [2024-10-01 15:58:56.816906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.527 [2024-10-01 15:58:56.816919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.527 [2024-10-01 15:58:56.816926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.527 [2024-10-01 15:58:56.826944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.826967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.827177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.827191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.528 [2024-10-01 15:58:56.827198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.827275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.827284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.528 [2024-10-01 15:58:56.827291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.827303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.827313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.827322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.827329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.827335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.528 [2024-10-01 15:58:56.827344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.827350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.827356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.528 [2024-10-01 15:58:56.827496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.528 [2024-10-01 15:58:56.827510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.528 [2024-10-01 15:58:56.838113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.838134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.838302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.838316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.528 [2024-10-01 15:58:56.838323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.838448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.838458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.528 [2024-10-01 15:58:56.838464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.838803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.838816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.838981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.838991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.838998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.528 [2024-10-01 15:58:56.839008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.839014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.839020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.528 [2024-10-01 15:58:56.839194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.528 [2024-10-01 15:58:56.839205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.528 [2024-10-01 15:58:56.849988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.850010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.850409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.850425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.528 [2024-10-01 15:58:56.850433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.850626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.850637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.528 [2024-10-01 15:58:56.850644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.850901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.850915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.851063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.851078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.851084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.528 [2024-10-01 15:58:56.851093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.851099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.851105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.528 [2024-10-01 15:58:56.851135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.528 [2024-10-01 15:58:56.851142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.528 [2024-10-01 15:58:56.862478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.862499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.528 [2024-10-01 15:58:56.862725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.862738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.528 [2024-10-01 15:58:56.862745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.862960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.528 [2024-10-01 15:58:56.862972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.528 [2024-10-01 15:58:56.862978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.528 [2024-10-01 15:58:56.862999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.863009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.528 [2024-10-01 15:58:56.863018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.528 [2024-10-01 15:58:56.863024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.528 [2024-10-01 15:58:56.863031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.528 [2024-10-01 15:58:56.863039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.863045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.863051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.863065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.863072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.529 [2024-10-01 15:58:56.874227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.874249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.874623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.874639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.529 [2024-10-01 15:58:56.874647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.874844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.874858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.529 [2024-10-01 15:58:56.874870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.875046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.875060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.875201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.875211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.875217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.529 [2024-10-01 15:58:56.875227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.875233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.875239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.875270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.875277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.884791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.884811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.884985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.884998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.529 [2024-10-01 15:58:56.885006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.885136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.885146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.529 [2024-10-01 15:58:56.885152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.885164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.885172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.885182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.885189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.885195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.885203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.885209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.885215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.885228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.885235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.529 [2024-10-01 15:58:56.896676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.896698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.897040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.897057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.529 [2024-10-01 15:58:56.897064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.897258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.897268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.529 [2024-10-01 15:58:56.897275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.897527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.897540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.897688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.897698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.897705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.529 [2024-10-01 15:58:56.897714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.897720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.897726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.897755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.897763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.908550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.908571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.908732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.908745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.529 [2024-10-01 15:58:56.908752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.908919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.908930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.529 [2024-10-01 15:58:56.908937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.908949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.908959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.908969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.908976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.908985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.908994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.909000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.909006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.529 [2024-10-01 15:58:56.909019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.529 [2024-10-01 15:58:56.909026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.529 [2024-10-01 15:58:56.921600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.921622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.529 [2024-10-01 15:58:56.921780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.921793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.529 [2024-10-01 15:58:56.921800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.921923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.529 [2024-10-01 15:58:56.921933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.529 [2024-10-01 15:58:56.921940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.529 [2024-10-01 15:58:56.921952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.921961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.529 [2024-10-01 15:58:56.921971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.921977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.921983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.529 [2024-10-01 15:58:56.921991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.529 [2024-10-01 15:58:56.921997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.529 [2024-10-01 15:58:56.922004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.922018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.922024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.932393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.932414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.932603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.932616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.530 [2024-10-01 15:58:56.932624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.932814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.932824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.530 [2024-10-01 15:58:56.932834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.933062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.933076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.933220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.933231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.933237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.933246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.933253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.933259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.933400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.933410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.530 [2024-10-01 15:58:56.943463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.943483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.943653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.943665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.530 [2024-10-01 15:58:56.943673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.943813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.943823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.530 [2024-10-01 15:58:56.943830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.944176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.944190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.944348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.944358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.944364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.530 [2024-10-01 15:58:56.944374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.944380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.944386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.944559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.944568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.954807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.954832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.955071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.955084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.530 [2024-10-01 15:58:56.955092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.955231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.955240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.530 [2024-10-01 15:58:56.955247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.955692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.955706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.955910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.955922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.955929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.955938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.955944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.955950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.955983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.955991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.530 [2024-10-01 15:58:56.966486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.966506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.966769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.966786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.530 [2024-10-01 15:58:56.966793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.966941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.966951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.530 [2024-10-01 15:58:56.966958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.530 [2024-10-01 15:58:56.967133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.967146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.530 [2024-10-01 15:58:56.967174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.967182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.967188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.530 [2024-10-01 15:58:56.967197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.530 [2024-10-01 15:58:56.967209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.530 [2024-10-01 15:58:56.967215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.530 [2024-10-01 15:58:56.967344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 [2024-10-01 15:58:56.967353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.530 11338.00 IOPS, 44.29 MiB/s [2024-10-01 15:58:56.978797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.978815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.530 [2024-10-01 15:58:56.978977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.530 [2024-10-01 15:58:56.978990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:56.978997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:56.979194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:56.979205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.531 [2024-10-01 15:58:56.979211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:56.980098] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:56.980113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:56.980205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:56.980212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:56.980219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:56.980228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:56.980234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:56.980240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:56.980254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:56.980260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.531 [2024-10-01 15:58:56.988994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:56.989043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:56.989277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:56.989290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.531 [2024-10-01 15:58:56.989298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:56.989511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:56.989524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:56.989531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:56.989544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:56.989688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:56.989698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:56.989704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:56.989711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.531 [2024-10-01 15:58:56.989740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:56.989748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:56.989753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:56.989760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:56.989772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:57.001526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.001548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.001912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.001928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.531 [2024-10-01 15:58:57.001936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.002130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.002140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:57.002147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.002398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.002412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.002560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.002570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.002577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:57.002586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.002592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.002598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:57.002624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:57.002631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.531 [2024-10-01 15:58:57.012442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.012462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.012621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.012633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:57.012640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.012717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.012727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.531 [2024-10-01 15:58:57.012734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.012745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.012754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.012764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.012770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.012777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.531 [2024-10-01 15:58:57.012786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.012792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.012798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:57.012811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:57.012818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:57.025218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.025239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.025403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.025416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.531 [2024-10-01 15:58:57.025423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.025505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.025515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:57.025521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.025534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.025543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.531 [2024-10-01 15:58:57.025552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.025558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.025565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:57.025574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.531 [2024-10-01 15:58:57.025583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.531 [2024-10-01 15:58:57.025589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.531 [2024-10-01 15:58:57.025602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.531 [2024-10-01 15:58:57.025609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.531 [2024-10-01 15:58:57.037438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.037460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.531 [2024-10-01 15:58:57.037668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.037681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.531 [2024-10-01 15:58:57.037688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.531 [2024-10-01 15:58:57.037859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.531 [2024-10-01 15:58:57.037876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.037883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.038144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.038158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.038201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.038210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.038216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.532 [2024-10-01 15:58:57.038225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.038231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.038238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.038426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.038436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.048374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.048394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.048548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.048561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.048568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.048714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.048724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.532 [2024-10-01 15:58:57.048731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.048742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.048754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.048764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.048770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.048776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.048784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.048790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.048797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.048810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.048816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.532 [2024-10-01 15:58:57.059330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.059350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.059580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.059591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.532 [2024-10-01 15:58:57.059599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.059686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.059696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.059702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.059713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.059723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.059733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.059739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.059745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.532 [2024-10-01 15:58:57.059754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.059759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.059766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.059779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.059785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.071225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.071246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.071484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.071497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.071509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.071597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.071606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.532 [2024-10-01 15:58:57.071613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.071624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.071633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.071643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.071649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.071656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.071664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.071670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.071676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.071689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.071696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.532 [2024-10-01 15:58:57.083080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.083102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.083426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.083442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.532 [2024-10-01 15:58:57.083450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.083671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.083682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.083689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.084045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.084061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.532 [2024-10-01 15:58:57.084213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.084224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.084230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.532 [2024-10-01 15:58:57.084240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.532 [2024-10-01 15:58:57.084246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.532 [2024-10-01 15:58:57.084256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.532 [2024-10-01 15:58:57.084398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.084408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.532 [2024-10-01 15:58:57.093984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.094005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.532 [2024-10-01 15:58:57.094169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.094181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.532 [2024-10-01 15:58:57.094188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.094382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.532 [2024-10-01 15:58:57.094391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.532 [2024-10-01 15:58:57.094398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.532 [2024-10-01 15:58:57.094652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.094665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.094908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.094919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.094926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.094935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.094940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.094947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.095097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.095107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.533 [2024-10-01 15:58:57.104884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.104904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.105092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.105104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.533 [2024-10-01 15:58:57.105111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.105273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.105283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.533 [2024-10-01 15:58:57.105289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.105301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.105310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.105323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.105329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.105335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.533 [2024-10-01 15:58:57.105344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.105350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.105355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.105368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.105375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.116656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.116678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.117027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.117044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.533 [2024-10-01 15:58:57.117052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.117217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.117227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.533 [2024-10-01 15:58:57.117234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.117380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.117393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.117425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.117433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.117440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.117449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.117456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.117462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.117476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.117482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.533 [2024-10-01 15:58:57.128426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.128447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.128761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.128776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.533 [2024-10-01 15:58:57.128784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.128931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.128941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.533 [2024-10-01 15:58:57.128947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.129298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.129313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.129352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.129360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.129366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.533 [2024-10-01 15:58:57.129374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.129380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.129387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.129401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.129407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.139404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.139425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.139728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.139744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.533 [2024-10-01 15:58:57.139752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.139971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.139983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.533 [2024-10-01 15:58:57.139990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.140195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.140210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.140352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.140362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.140369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.140378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.533 [2024-10-01 15:58:57.140385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.533 [2024-10-01 15:58:57.140391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.533 [2024-10-01 15:58:57.140534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.533 [2024-10-01 15:58:57.140547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.533 [2024-10-01 15:58:57.150512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.150533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.533 [2024-10-01 15:58:57.150718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.150730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.533 [2024-10-01 15:58:57.150738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.150929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.533 [2024-10-01 15:58:57.150939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.533 [2024-10-01 15:58:57.150946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.533 [2024-10-01 15:58:57.151285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.151298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.533 [2024-10-01 15:58:57.151456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.151466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.151473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.534 [2024-10-01 15:58:57.151482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.151489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.151495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.151669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.151679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.161860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.161885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.162037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.162049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.534 [2024-10-01 15:58:57.162056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.162273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.162283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.534 [2024-10-01 15:58:57.162291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.162739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.162752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.162927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.162941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.162948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.162957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.162963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.162969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.163000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.163007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.534 [2024-10-01 15:58:57.172573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.172595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.172756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.172768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.534 [2024-10-01 15:58:57.172775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.172994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.173004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.534 [2024-10-01 15:58:57.173011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.173023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.173032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.173042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.173048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.173055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.534 [2024-10-01 15:58:57.173063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.173069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.173075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.173089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.173096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.185428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.185449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.185761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.185777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.534 [2024-10-01 15:58:57.185784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.186015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.186030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.534 [2024-10-01 15:58:57.186037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.186389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.186404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.186558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.186568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.186575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.186584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.186590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.186596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.186738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.186747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.534 [2024-10-01 15:58:57.196107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.196128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.196735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.196753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.534 [2024-10-01 15:58:57.196761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.196900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.196911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.534 [2024-10-01 15:58:57.196918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.197186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.197200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.197403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.197413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.197420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.534 [2024-10-01 15:58:57.197429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.197435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.534 [2024-10-01 15:58:57.197442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.534 [2024-10-01 15:58:57.197472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.197479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.534 [2024-10-01 15:58:57.207305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.207326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.534 [2024-10-01 15:58:57.207651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.207667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.534 [2024-10-01 15:58:57.207675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.207887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.534 [2024-10-01 15:58:57.207899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.534 [2024-10-01 15:58:57.207906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.534 [2024-10-01 15:58:57.208112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.208127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.534 [2024-10-01 15:58:57.208269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.534 [2024-10-01 15:58:57.208280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.208287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.208297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.208304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.208311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.208455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.208465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.535 [2024-10-01 15:58:57.218771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.218792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.219161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.219178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.535 [2024-10-01 15:58:57.219185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.219340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.219350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.535 [2024-10-01 15:58:57.219356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.219637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.219652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.219691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.219698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.219707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.535 [2024-10-01 15:58:57.219717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.219723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.219729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.219858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.219874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.229487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.229509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.229799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.229815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.535 [2024-10-01 15:58:57.229823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.229992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.230004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.535 [2024-10-01 15:58:57.230011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.230155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.230168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.230305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.230315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.230322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.230331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.230337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.230343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.230370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.230377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.535 [2024-10-01 15:58:57.241068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.241089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.241333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.241346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.535 [2024-10-01 15:58:57.241353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.241502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.241512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.535 [2024-10-01 15:58:57.241522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.241533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.241543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.241560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.241567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.241573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.535 [2024-10-01 15:58:57.241582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.241588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.241594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.241607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.241614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.253169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.253192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.535 [2024-10-01 15:58:57.253368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.253380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.535 [2024-10-01 15:58:57.253388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.253517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.535 [2024-10-01 15:58:57.253527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.535 [2024-10-01 15:58:57.253534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.535 [2024-10-01 15:58:57.254410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.254425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.535 [2024-10-01 15:58:57.254958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.254972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.254978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.254988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.535 [2024-10-01 15:58:57.254994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.535 [2024-10-01 15:58:57.255001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.535 [2024-10-01 15:58:57.255194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.535 [2024-10-01 15:58:57.255204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.535 [2024-10-01 15:58:57.265500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.535 [2024-10-01 15:58:57.265530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.535 [2024-10-01 15:58:57.265859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.535 [2024-10-01 15:58:57.265882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.535 [2024-10-01 15:58:57.265890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.535 [2024-10-01 15:58:57.265985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.265994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.266002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.266146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.266158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.266296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.266306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.266312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.266321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.266327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.266333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.266363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.266370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.276593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.276615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.276833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.276847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.276854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.276951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.276962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.536 [2024-10-01 15:58:57.276968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.277099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.277111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.277249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.277259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.277266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.277279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.277285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.277291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.277321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.277329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.287724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.287746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.287907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.287920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.536 [2024-10-01 15:58:57.287927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.288064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.288074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.288081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.288093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.288102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.288112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.288118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.288124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.288133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.288139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.288145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.288158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.288165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.298536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.298558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.298688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.298701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.298708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.298844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.298854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.536 [2024-10-01 15:58:57.298860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.299000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.299011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.299357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.299368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.299375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.299384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.299391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.299396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.299552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.299562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.309618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.309639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.309974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.309992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.536 [2024-10-01 15:58:57.310000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.310218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.310229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.310236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.310489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.310502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.536 [2024-10-01 15:58:57.310661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.310672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.310678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.310688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.536 [2024-10-01 15:58:57.310694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.536 [2024-10-01 15:58:57.310700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.536 [2024-10-01 15:58:57.310842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.310852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.536 [2024-10-01 15:58:57.320843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.320872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.536 [2024-10-01 15:58:57.321192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.321209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.536 [2024-10-01 15:58:57.321216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.536 [2024-10-01 15:58:57.321352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.536 [2024-10-01 15:58:57.321362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.536 [2024-10-01 15:58:57.321368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.321511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.321523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.321661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.321671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.321678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.321687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.321693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.321699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.321728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.321736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.331897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.331919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.332262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.332278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.537 [2024-10-01 15:58:57.332285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.332444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.332454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.537 [2024-10-01 15:58:57.332461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.332721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.332735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.332771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.332778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.332785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.332795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.332804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.332810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.332946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.332956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.342923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.342944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.343257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.343273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.537 [2024-10-01 15:58:57.343280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.343433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.343443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.537 [2024-10-01 15:58:57.343449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.343601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.343613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.343751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.343762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.343768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.343778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.343784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.343790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.343937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.343947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.353903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.353924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.354234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.354251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.537 [2024-10-01 15:58:57.354258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.354492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.354503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.537 [2024-10-01 15:58:57.354510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.354666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.354684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.354823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.354833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.354839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.354849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.354855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.354861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.354897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.354904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.365404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.365426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.365631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.365645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.537 [2024-10-01 15:58:57.365652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.365784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.365794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.537 [2024-10-01 15:58:57.365801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.366005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.366020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.366113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.366121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.366128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.366136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.537 [2024-10-01 15:58:57.366142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.537 [2024-10-01 15:58:57.366148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.537 [2024-10-01 15:58:57.366169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.366176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.537 [2024-10-01 15:58:57.376570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.376592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.537 [2024-10-01 15:58:57.376721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.376733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.537 [2024-10-01 15:58:57.376744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.376916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.537 [2024-10-01 15:58:57.376927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.537 [2024-10-01 15:58:57.376933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.537 [2024-10-01 15:58:57.377127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.377140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.537 [2024-10-01 15:58:57.377234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.377242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.377249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.377258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.377264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.377270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.377289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.377297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.387282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.387305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.387480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.387493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.538 [2024-10-01 15:58:57.387501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.387586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.387595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.538 [2024-10-01 15:58:57.387602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.387733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.387745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.387890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.387899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.387906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.387915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.387921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.387932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.387962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.387970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.397837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.397859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.398064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.398078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.538 [2024-10-01 15:58:57.398086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.398162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.398172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.538 [2024-10-01 15:58:57.398179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.398309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.398321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.398348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.398355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.398362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.398370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.398377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.398383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.398511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.398520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.409808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.409829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.410005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.410019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.538 [2024-10-01 15:58:57.410027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.410146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.410156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.538 [2024-10-01 15:58:57.410163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.410174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.410183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.410197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.410203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.410210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.410218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.410224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.410230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.410244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.410251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.421084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.421108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.421339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.421353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.538 [2024-10-01 15:58:57.421361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.421514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.421525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.538 [2024-10-01 15:58:57.421532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.421545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.421554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.421565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.421572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.421578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.421588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.421594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.421600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.538 [2024-10-01 15:58:57.421614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.421621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.538 [2024-10-01 15:58:57.432652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.432676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.538 [2024-10-01 15:58:57.433010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.433027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.538 [2024-10-01 15:58:57.433035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.433114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.538 [2024-10-01 15:58:57.433124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.538 [2024-10-01 15:58:57.433131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.538 [2024-10-01 15:58:57.433274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.433287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.538 [2024-10-01 15:58:57.433632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.538 [2024-10-01 15:58:57.433644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.538 [2024-10-01 15:58:57.433651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.433660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.433667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.433673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.433828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.433838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.443655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.443677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.443835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.443848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.539 [2024-10-01 15:58:57.443856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.443967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.443977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.539 [2024-10-01 15:58:57.443984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.443998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.444010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.444021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.444028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.444034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.444043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.444051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.444057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.444074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.444081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.454587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.454609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.454820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.454833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.539 [2024-10-01 15:58:57.454840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.455041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.455053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.539 [2024-10-01 15:58:57.455060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.455073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.455082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.455092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.455098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.455105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.455114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.455120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.455127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.455140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.455147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.466580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.466602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.466831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.466846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.539 [2024-10-01 15:58:57.466854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.466943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.466953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.539 [2024-10-01 15:58:57.466961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.467106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.467118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.467144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.467155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.467162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.467171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.467177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.467183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.467197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.467203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.477401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.477422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.477524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.477536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.539 [2024-10-01 15:58:57.477544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.477637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.477647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.539 [2024-10-01 15:58:57.477654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.477666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.477676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.539 [2024-10-01 15:58:57.477686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.477693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.477700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.477708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.539 [2024-10-01 15:58:57.477714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.539 [2024-10-01 15:58:57.477720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.539 [2024-10-01 15:58:57.477734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.477741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.539 [2024-10-01 15:58:57.489600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.489623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.539 [2024-10-01 15:58:57.489739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.489752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.539 [2024-10-01 15:58:57.489759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.539 [2024-10-01 15:58:57.489917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.539 [2024-10-01 15:58:57.489927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.489934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.489945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.489954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.489965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.489971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.489977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.489986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.489991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.489997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.490011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.490018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.499682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.499712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.499806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.499818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.499826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.499911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.499921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.540 [2024-10-01 15:58:57.499928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.499936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.499947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.499955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.499961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.499967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.499980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.499986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.499991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.499998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.500010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.510029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.510050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.510266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.510279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.510287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.510419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.510429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.540 [2024-10-01 15:58:57.510435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.510447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.510456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.510467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.510473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.510479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.510487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.510493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.510499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.510513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.510519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.520108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.520138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.520353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.520365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.540 [2024-10-01 15:58:57.520372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.520526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.520536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.520543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.520551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.520563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.520570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.520576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.520585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.520598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.520605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.520611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.520617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.520628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.530640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.530661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.530783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.530795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.540 [2024-10-01 15:58:57.530803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.530953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.530964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.530970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.530982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.530991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.531001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.531007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.531013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.531022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.540 [2024-10-01 15:58:57.531028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.540 [2024-10-01 15:58:57.531034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.540 [2024-10-01 15:58:57.531423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.531434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.540 [2024-10-01 15:58:57.541868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.541888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.540 [2024-10-01 15:58:57.542005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.542018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.540 [2024-10-01 15:58:57.542025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.542119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.540 [2024-10-01 15:58:57.542128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.540 [2024-10-01 15:58:57.542141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.540 [2024-10-01 15:58:57.542153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.540 [2024-10-01 15:58:57.542162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.541 [2024-10-01 15:58:57.542172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.541 [2024-10-01 15:58:57.542178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.541 [2024-10-01 15:58:57.542184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.541 [2024-10-01 15:58:57.542192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.541 [2024-10-01 15:58:57.542198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.541 [2024-10-01 15:58:57.542204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.541 [2024-10-01 15:58:57.542217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.541 [2024-10-01 15:58:57.542224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.541 [2024-10-01 15:58:57.552684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.552706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.553627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.553646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.541 [2024-10-01 15:58:57.553654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.553744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.553754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.541 [2024-10-01 15:58:57.553761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.553833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.553843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.553854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.553860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.553872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.541 [2024-10-01 15:58:57.553882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.553888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.553894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.553908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.553915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.565876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.565902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.566259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.566275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.541 [2024-10-01 15:58:57.566282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.566432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.566442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.541 [2024-10-01 15:58:57.566449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.566798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.566813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.566978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.566989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.566996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.567005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.567011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.567018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.567161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.567170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.541 [2024-10-01 15:58:57.576904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.576926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.577173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.577190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.541 [2024-10-01 15:58:57.577198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.577336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.577346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.541 [2024-10-01 15:58:57.577353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.577496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.577508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.577645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.577655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.577662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.541 [2024-10-01 15:58:57.577675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.577681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.577687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.577716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.577723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.587886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.587907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.588048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.588060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.541 [2024-10-01 15:58:57.588068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.588150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.588159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.541 [2024-10-01 15:58:57.588167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.588178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.588187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.588197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.588203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.588210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.588219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.588224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.588230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.541 [2024-10-01 15:58:57.588244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.541 [2024-10-01 15:58:57.588250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.541 [2024-10-01 15:58:57.600263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.600284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.541 [2024-10-01 15:58:57.600475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.600487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.541 [2024-10-01 15:58:57.600495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.600638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.541 [2024-10-01 15:58:57.600647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.541 [2024-10-01 15:58:57.600655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.541 [2024-10-01 15:58:57.600669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.600678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.541 [2024-10-01 15:58:57.600688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.541 [2024-10-01 15:58:57.600694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.541 [2024-10-01 15:58:57.600700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.542 [2024-10-01 15:58:57.600708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.600715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.600721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.600734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.600740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.613425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.613447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.613610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.613622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.542 [2024-10-01 15:58:57.613629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.613845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.613854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.542 [2024-10-01 15:58:57.613861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.614313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.614326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.614528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.614539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.614545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.614555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.614562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.614568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.614713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.614723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.542 [2024-10-01 15:58:57.624738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.624758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.624936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.624949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.542 [2024-10-01 15:58:57.624957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.625121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.625131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.542 [2024-10-01 15:58:57.625137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.625388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.625401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.625638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.625649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.625656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.542 [2024-10-01 15:58:57.625665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.625672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.625678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.625827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.625836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.635649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.635669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.635837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.635849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.542 [2024-10-01 15:58:57.635857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.635998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.636008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.542 [2024-10-01 15:58:57.636015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.636027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.636036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.636045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.636052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.636058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.636065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.636075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.636081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.636095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.636101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.542 [2024-10-01 15:58:57.647888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.647910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.648164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.648180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.542 [2024-10-01 15:58:57.648187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.648387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.648398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.542 [2024-10-01 15:58:57.648405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.648548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.648561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.648710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.648721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.648727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.542 [2024-10-01 15:58:57.648737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.648743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.648750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.648779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.648786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.542 [2024-10-01 15:58:57.658611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.658632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.542 [2024-10-01 15:58:57.658793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.658805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.542 [2024-10-01 15:58:57.658812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.658983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.542 [2024-10-01 15:58:57.658993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.542 [2024-10-01 15:58:57.659000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.542 [2024-10-01 15:58:57.659011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.659024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.542 [2024-10-01 15:58:57.659033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.659039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.659045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.542 [2024-10-01 15:58:57.659053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.542 [2024-10-01 15:58:57.659059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.542 [2024-10-01 15:58:57.659065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.543 [2024-10-01 15:58:57.659079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.543 [2024-10-01 15:58:57.659085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.543 [2024-10-01 15:58:57.670796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.543 [2024-10-01 15:58:57.670817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.543 [2024-10-01 15:58:57.670940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-10-01 15:58:57.670953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.543 [2024-10-01 15:58:57.670961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.543 [2024-10-01 15:58:57.671106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-10-01 15:58:57.671116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.543 [2024-10-01 15:58:57.671122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.543 [2024-10-01 15:58:57.671134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.543 [2024-10-01 15:58:57.671143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.543 [2024-10-01 15:58:57.671153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.543 [2024-10-01 15:58:57.671159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.543 [2024-10-01 15:58:57.671166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.543 [2024-10-01 15:58:57.671174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.543 [2024-10-01 15:58:57.671180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.543 [2024-10-01 15:58:57.671186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.543 [2024-10-01 15:58:57.671200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.543 [2024-10-01 15:58:57.671206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.543 [2024-10-01 15:58:57.682174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.543 [2024-10-01 15:58:57.682195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.543 [2024-10-01 15:58:57.682297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-10-01 15:58:57.682313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.543 [2024-10-01 15:58:57.682320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.543 [2024-10-01 15:58:57.682453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.543 [2024-10-01 15:58:57.682463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.543 [2024-10-01 15:58:57.682470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.543 [2024-10-01 15:58:57.682482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.543 [2024-10-01 15:58:57.682491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.543 [2024-10-01 15:58:57.682501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.543 [2024-10-01 15:58:57.682508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.543 [2024-10-01 15:58:57.682514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.543 [2024-10-01 15:58:57.682523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.543 [2024-10-01 15:58:57.682528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.543 [2024-10-01 15:58:57.682534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.543 [2024-10-01 15:58:57.682548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.543 [2024-10-01 15:58:57.682555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.543 [2024-10-01 15:58:57.693258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.693279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.693484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.693496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.543 [2024-10-01 15:58:57.693503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.693676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.693686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.543 [2024-10-01 15:58:57.693693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.693704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.693713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.693723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.543 [2024-10-01 15:58:57.693729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.543 [2024-10-01 15:58:57.693735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.543 [2024-10-01 15:58:57.693743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.543 [2024-10-01 15:58:57.693749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.543 [2024-10-01 15:58:57.693759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.543 [2024-10-01 15:58:57.693773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.543 [2024-10-01 15:58:57.693779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.543 [2024-10-01 15:58:57.704646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.704666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.704836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.704849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.543 [2024-10-01 15:58:57.704857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.704964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.704975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.543 [2024-10-01 15:58:57.704981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.704993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.705002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.705012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.543 [2024-10-01 15:58:57.705018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.543 [2024-10-01 15:58:57.705024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.543 [2024-10-01 15:58:57.705033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.543 [2024-10-01 15:58:57.705038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.543 [2024-10-01 15:58:57.705044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.543 [2024-10-01 15:58:57.705057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.543 [2024-10-01 15:58:57.705064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.543 [2024-10-01 15:58:57.716835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.716855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.543 [2024-10-01 15:58:57.717030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.717042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.543 [2024-10-01 15:58:57.717050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.717141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.543 [2024-10-01 15:58:57.717151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.543 [2024-10-01 15:58:57.717157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.543 [2024-10-01 15:58:57.717169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.717178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.543 [2024-10-01 15:58:57.717191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.717197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.717203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.717212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.717217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.717223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.717237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.717244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.728467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.728488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.728732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.728748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.544 [2024-10-01 15:58:57.728755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.728860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.728876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.544 [2024-10-01 15:58:57.728883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.729035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.729048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.729074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.729081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.729088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.729097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.729103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.729109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.729122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.729129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.739229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.739250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.739541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.739556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.544 [2024-10-01 15:58:57.739567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.739712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.739722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.544 [2024-10-01 15:58:57.739729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.739880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.739893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.740030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.740041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.740047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.740056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.740062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.740068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.740097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.740104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.750776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.750797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.750909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.750922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.544 [2024-10-01 15:58:57.750929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.751103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.751113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.544 [2024-10-01 15:58:57.751120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.751131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.751140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.751149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.751156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.751162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.751170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.751176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.751182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.751199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.751206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.764053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.764074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.764311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.764332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.544 [2024-10-01 15:58:57.764340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.764421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.764431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.544 [2024-10-01 15:58:57.764438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.764612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.764625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.544 [2024-10-01 15:58:57.764765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.764776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.764782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.764792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.544 [2024-10-01 15:58:57.764798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.544 [2024-10-01 15:58:57.764804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.544 [2024-10-01 15:58:57.764834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.764841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.544 [2024-10-01 15:58:57.775203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.775225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.544 [2024-10-01 15:58:57.775465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.775480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.544 [2024-10-01 15:58:57.775487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.775579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.544 [2024-10-01 15:58:57.775588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.544 [2024-10-01 15:58:57.775595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.544 [2024-10-01 15:58:57.775725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.775737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.775881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.775895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.775902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.775910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.775916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.775922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.775953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.775960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.785936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.785958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.786303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.786319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.545 [2024-10-01 15:58:57.786327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.786462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.786472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.545 [2024-10-01 15:58:57.786479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.786622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.786634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.786783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.786794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.786801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.786811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.786817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.786823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.786852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.786860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.796459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.796481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.796598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.796611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.545 [2024-10-01 15:58:57.796618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.796702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.796712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.545 [2024-10-01 15:58:57.796719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.796848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.796861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.796896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.796903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.796910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.796919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.796925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.796931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.797059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.797068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.808714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.808735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.808943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.808957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.545 [2024-10-01 15:58:57.808964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.809186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.809196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.545 [2024-10-01 15:58:57.809203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.809215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.809224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.809234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.809241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.809247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.809256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.809261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.809268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.809281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.809288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.819502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.819524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.819686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.819699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.545 [2024-10-01 15:58:57.819706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.819842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.819852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.545 [2024-10-01 15:58:57.819859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.819876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.819886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.819896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.819902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.819909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.819917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.819923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.545 [2024-10-01 15:58:57.819929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.545 [2024-10-01 15:58:57.819943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.819949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.545 [2024-10-01 15:58:57.830287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.830308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.545 [2024-10-01 15:58:57.830545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.830567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.545 [2024-10-01 15:58:57.830574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.830716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.545 [2024-10-01 15:58:57.830726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.545 [2024-10-01 15:58:57.830732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.545 [2024-10-01 15:58:57.830744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.830753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.545 [2024-10-01 15:58:57.830763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.545 [2024-10-01 15:58:57.830770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.546 [2024-10-01 15:58:57.830782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.546 [2024-10-01 15:58:57.830791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.830797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.830803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.830817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.830823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.841695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.841717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.842084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.842101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.546 [2024-10-01 15:58:57.842109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.842324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.842334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.546 [2024-10-01 15:58:57.842341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.842484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.842497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.842523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.842530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.842537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.842546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.842551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.842557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.842571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.842578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.546 [2024-10-01 15:58:57.852210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.852231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.852473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.852492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.546 [2024-10-01 15:58:57.852499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.852639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.852648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.546 [2024-10-01 15:58:57.852659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.852799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.852811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.852905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.852913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.852920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.546 [2024-10-01 15:58:57.852930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.852935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.852941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.853019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.853028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.862553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.862574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.862745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.862758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.546 [2024-10-01 15:58:57.862766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.862962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.862972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.546 [2024-10-01 15:58:57.862979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.863218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.863232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.863268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.863276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.863282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.863291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.863296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.863303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.863317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.863324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.546 [2024-10-01 15:58:57.874330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.874354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.874557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.874570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.546 [2024-10-01 15:58:57.874577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.874767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.874784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.546 [2024-10-01 15:58:57.874791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.874988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.875003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.875096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.875105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.875111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.546 [2024-10-01 15:58:57.875120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.875127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.875133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.875152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.875160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.546 [2024-10-01 15:58:57.885000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.885022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.546 [2024-10-01 15:58:57.885595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.885613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.546 [2024-10-01 15:58:57.885621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.885756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.546 [2024-10-01 15:58:57.885766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.546 [2024-10-01 15:58:57.885773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.546 [2024-10-01 15:58:57.885936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.885949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.546 [2024-10-01 15:58:57.885977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.546 [2024-10-01 15:58:57.885985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.546 [2024-10-01 15:58:57.885991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.546 [2024-10-01 15:58:57.886003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.886009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.886015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.886028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.886035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.547 [2024-10-01 15:58:57.895955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.895976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.896210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.896225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.547 [2024-10-01 15:58:57.896233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.896411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.896422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.547 [2024-10-01 15:58:57.896430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.896573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.896585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.896611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.896618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.896625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.547 [2024-10-01 15:58:57.896633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.896639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.896646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.896773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.896782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.907117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.907138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.907480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.907496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.547 [2024-10-01 15:58:57.907504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.907643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.907653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.547 [2024-10-01 15:58:57.907660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.907807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.907819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.907962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.907973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.907980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.907988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.907994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.908000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.908030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.908038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.547 [2024-10-01 15:58:57.918124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.918145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.918385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.918404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.547 [2024-10-01 15:58:57.918411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.918568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.918578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.547 [2024-10-01 15:58:57.918584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.918596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.918605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.918615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.918622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.918628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.547 [2024-10-01 15:58:57.918636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.918642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.918648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.918662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.918669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.930596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.930617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.930774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.930787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.547 [2024-10-01 15:58:57.930794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.930985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.930996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.547 [2024-10-01 15:58:57.931003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.931390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.931404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.931563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.931573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.931580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.931589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.931595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.931601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.931744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.547 [2024-10-01 15:58:57.931754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.547 [2024-10-01 15:58:57.942528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.942549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.547 [2024-10-01 15:58:57.942687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.942699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.547 [2024-10-01 15:58:57.942707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.942921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.547 [2024-10-01 15:58:57.942931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.547 [2024-10-01 15:58:57.942938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.547 [2024-10-01 15:58:57.943392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.943406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.547 [2024-10-01 15:58:57.943582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.943593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.943599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.547 [2024-10-01 15:58:57.943608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.547 [2024-10-01 15:58:57.943624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.547 [2024-10-01 15:58:57.943630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.547 [2024-10-01 15:58:57.943773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.548 [2024-10-01 15:58:57.943782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.548 [2024-10-01 15:58:57.953506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.548 [2024-10-01 15:58:57.953526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.548 [2024-10-01 15:58:57.953738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.548 [2024-10-01 15:58:57.953751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.548 [2024-10-01 15:58:57.953758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.548 [2024-10-01 15:58:57.953904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.548 [2024-10-01 15:58:57.953914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.548 [2024-10-01 15:58:57.953921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.548 [2024-10-01 15:58:57.953932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.548 [2024-10-01 15:58:57.953941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.548 [2024-10-01 15:58:57.953951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.548 [2024-10-01 15:58:57.953957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.548 [2024-10-01 15:58:57.953964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.548 [2024-10-01 15:58:57.953972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.548 [2024-10-01 15:58:57.953978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.548 [2024-10-01 15:58:57.953984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.548 [2024-10-01 15:58:57.953998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.548 [2024-10-01 15:58:57.954004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.548 [2024-10-01 15:58:57.965900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.965922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.966267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.966283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.548 [2024-10-01 15:58:57.966291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.966429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.966439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.548 [2024-10-01 15:58:57.966446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.966593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.966609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.966757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.966768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.966774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.966784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.966790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.966796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.966826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.548 [2024-10-01 15:58:57.966834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.548 [2024-10-01 15:58:57.976893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.976914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.977129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.977141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.548 [2024-10-01 15:58:57.977149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.977293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.977303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.548 [2024-10-01 15:58:57.977309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.977321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.977330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.977340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.977346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.977353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.977361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.977367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.977373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.977387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.548 [2024-10-01 15:58:57.977393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
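As context for the repeated `connect() failed, errno = 111` records above (this note is not part of the captured log): errno 111 is ECONNREFUSED on Linux, meaning nothing was accepting TCP connections on the target (here 10.0.0.2:4420 and 4421) at the moment the qpair tried to reconnect. A minimal Python sketch of the same failure, using an arbitrary loopback port as a stand-in for the unavailable listener:

```python
import errno
import os
import socket

# errno 111 is ECONNREFUSED on Linux (the number is platform-specific,
# the symbolic constant is portable).
print(errno.ECONNREFUSED, errno.errorcode[errno.ECONNREFUSED],
      os.strerror(errno.ECONNREFUSED))

# A plain TCP connect to a loopback port with no listener reproduces
# the same errno the SPDK sock layer reports:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("127.0.0.1", 1))  # port 1: almost certainly no listener
except OSError as e:
    print("connect() failed, errno =", e.errno)
finally:
    s.close()
```

The qpair addresses (0x964410, 0x987b70) alternating between ports 4420 and 4421 show two TCP qpairs to the same subsystem both failing the same way.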
00:24:57.548 11337.50 IOPS, 44.29 MiB/s [2024-10-01 15:58:57.987691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.987713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.988064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.988084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.548 [2024-10-01 15:58:57.988092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.988309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.988320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.548 [2024-10-01 15:58:57.988327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.988526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.988540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.548 [2024-10-01 15:58:57.988684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.988694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.988702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.988712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.548 [2024-10-01 15:58:57.988718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.548 [2024-10-01 15:58:57.988724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.548 [2024-10-01 15:58:57.988753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.548 [2024-10-01 15:58:57.988761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.548 [2024-10-01 15:58:57.998737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.998758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.548 [2024-10-01 15:58:57.998976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.998989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.548 [2024-10-01 15:58:57.998996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.999141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.548 [2024-10-01 15:58:57.999151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.548 [2024-10-01 15:58:57.999157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.548 [2024-10-01 15:58:57.999169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:57.999178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:57.999187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:57.999194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:57.999200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:57.999209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:57.999215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:57.999224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:57.999238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:57.999244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.010729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.010750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.011130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.011147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.549 [2024-10-01 15:58:58.011154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.011383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.011394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.549 [2024-10-01 15:58:58.011401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.011429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.011439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.011458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.011465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.011472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.011480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.011486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.011492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.011506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.011512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.021049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.021070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.021282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.021295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.549 [2024-10-01 15:58:58.021302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.021439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.021449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.549 [2024-10-01 15:58:58.021455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.021466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.021479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.021489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.021495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.021501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.021510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.021515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.021521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.022493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.022507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.032946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.032968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.033358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.033374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.549 [2024-10-01 15:58:58.033382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.033582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.033592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.549 [2024-10-01 15:58:58.033599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.033849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.033867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.034017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.034027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.034033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.034042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.034048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.034054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.034084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.034091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.044844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.044869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.045081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.045093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.549 [2024-10-01 15:58:58.045104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.045189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.045198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.549 [2024-10-01 15:58:58.045205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.045217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.045226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.045237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.045243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.045249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.045257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.549 [2024-10-01 15:58:58.045263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.549 [2024-10-01 15:58:58.045269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.549 [2024-10-01 15:58:58.045283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.549 [2024-10-01 15:58:58.045289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
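Similarly, as context for the `Failed to flush tqpair=... (9): Bad file descriptor` records (again, a note, not captured log output): errno 9 is EBADF, i.e. by the time the completion path tried to flush the qpair, its socket fd had already been torn down by the failed reconnect. A small Python sketch of the same class of error:

```python
import errno
import os

# errno 9 is EBADF ("Bad file descriptor").
print(errno.EBADF, errno.errorcode[errno.EBADF], os.strerror(errno.EBADF))

# Any I/O attempt on an already-closed fd reproduces it:
r, w = os.pipe()
os.close(w)            # close the write end first
bad_errno = None
try:
    os.write(w, b"x")  # w is closed, so the kernel rejects the write
except OSError as e:
    bad_errno = e.errno
os.close(r)
print("write() failed, errno =", bad_errno)
```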
00:24:57.549 [2024-10-01 15:58:58.057192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.057213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.549 [2024-10-01 15:58:58.057451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.057463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.549 [2024-10-01 15:58:58.057471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.057613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.549 [2024-10-01 15:58:58.057623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.549 [2024-10-01 15:58:58.057630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.549 [2024-10-01 15:58:58.057642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.057651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.549 [2024-10-01 15:58:58.057661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.057667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.057673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.057682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.057688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.057694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.057711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.057717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.069783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.069804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.069972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.069986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.550 [2024-10-01 15:58:58.069993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.070185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.070194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.550 [2024-10-01 15:58:58.070201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.070212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.070221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.070231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.070237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.070243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.070252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.070257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.070263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.070276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.070283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.082266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.082287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.082529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.082541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.550 [2024-10-01 15:58:58.082548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.082766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.082775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.550 [2024-10-01 15:58:58.082782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.082793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.082802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.082817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.082823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.082830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.082838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.082844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.082850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.082868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.082875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.092345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.092375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.092631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.092644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.550 [2024-10-01 15:58:58.092652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.092847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.092858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.550 [2024-10-01 15:58:58.092870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.092879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.094568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.094587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.094593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.094600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.095765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.095782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.095788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.095794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.096180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
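The timestamps of the `resetting controller` notices (57.965 -> 57.976 -> 57.987 -> 57.998 -> 58.010 -> 58.021 ...) show the driver retrying the reconnect roughly every 10 ms until it gives up and fails the reset. As a non-SPDK illustration only, the same retry-on-refused pattern can be sketched in Python (`try_connect` is a hypothetical helper, and the 10 ms delay is just the cadence read off the log):

```python
import socket
import time
from typing import Optional

def try_connect(addr: str, port: int,
                retries: int = 5, delay_s: float = 0.01) -> Optional[socket.socket]:
    """Retry a TCP connect while the peer refuses, pausing ~10 ms between
    attempts. Returns the connected socket, or None after exhausting retries
    (analogous to the reset eventually being marked failed in the log)."""
    for _ in range(retries):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((addr, port))
            return s  # connected; caller owns the socket
        except ConnectionRefusedError:
            s.close()
            time.sleep(delay_s)
    return None  # all attempts refused
```

In the log, every attempt hits ECONNREFUSED, so each cycle ends in `controller reinitialization failed` followed by `Resetting controller failed.`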
00:24:57.550 [2024-10-01 15:58:58.103402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.103422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.103724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.103739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.550 [2024-10-01 15:58:58.103746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.103827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.103837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.550 [2024-10-01 15:58:58.103843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.105558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.105577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.106486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.106498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.106505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.106515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.106521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.106527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.107048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.107061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.550 [2024-10-01 15:58:58.116398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.116419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.550 [2024-10-01 15:58:58.116656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.116668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.550 [2024-10-01 15:58:58.116676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.116839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.550 [2024-10-01 15:58:58.116848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.550 [2024-10-01 15:58:58.116855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.550 [2024-10-01 15:58:58.116872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.116881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.550 [2024-10-01 15:58:58.116891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.550 [2024-10-01 15:58:58.116897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.550 [2024-10-01 15:58:58.116903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.550 [2024-10-01 15:58:58.116911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.551 [2024-10-01 15:58:58.116917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.551 [2024-10-01 15:58:58.116923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.551 [2024-10-01 15:58:58.116936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.551 [2024-10-01 15:58:58.116949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.551 [2024-10-01 15:58:58.128827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.128848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.129075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.129088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.551 [2024-10-01 15:58:58.129095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.129311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.129322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.551 [2024-10-01 15:58:58.129328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.129783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.129796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.129969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.129980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.129987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.551 [2024-10-01 15:58:58.129996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.130002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.130008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.130150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.130160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.139614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.139634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.139873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.139886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.551 [2024-10-01 15:58:58.139893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.140037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.140046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.551 [2024-10-01 15:58:58.140053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.140064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.140074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.140083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.140093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.140100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.140108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.140114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.140120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.140133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.140140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.551 [2024-10-01 15:58:58.152627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.152650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.152935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.152949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.551 [2024-10-01 15:58:58.152956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.153100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.153110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.551 [2024-10-01 15:58:58.153116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.153308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.153321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.153415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.153423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.153429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.551 [2024-10-01 15:58:58.153438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.153444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.153450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.153619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.153629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.164121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.164142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.164808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.164826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.551 [2024-10-01 15:58:58.164833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.164981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.164995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.551 [2024-10-01 15:58:58.165002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.165285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.165299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.165335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.165342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.165349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.165358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.165364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.165371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.165384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.551 [2024-10-01 15:58:58.165391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.551 [2024-10-01 15:58:58.174352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.174373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.551 [2024-10-01 15:58:58.174609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.174622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.551 [2024-10-01 15:58:58.174629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.174822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.551 [2024-10-01 15:58:58.174833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.551 [2024-10-01 15:58:58.174840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.551 [2024-10-01 15:58:58.174851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.174861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.551 [2024-10-01 15:58:58.174876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.174882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.174889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.551 [2024-10-01 15:58:58.174897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.551 [2024-10-01 15:58:58.174903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.551 [2024-10-01 15:58:58.174909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.551 [2024-10-01 15:58:58.174923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.174930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.184932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.184953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.185143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.185156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-10-01 15:58:58.185164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.185378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.185388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.552 [2024-10-01 15:58:58.185394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.185406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.185415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.185425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.185432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.185439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.185448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.185453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.185459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.185473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.185480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.552 [2024-10-01 15:58:58.195884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.195904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.196074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.196086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.552 [2024-10-01 15:58:58.196094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.196308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.196319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-10-01 15:58:58.196325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.196337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.196346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.196356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.196362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.196372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.552 [2024-10-01 15:58:58.196380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.196386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.196392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.196406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.196412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.206194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.206215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.206447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.206460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-10-01 15:58:58.206468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.206602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.206612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.552 [2024-10-01 15:58:58.206618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.206630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.206639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.206649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.206655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.206662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.206671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.206677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.206682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.206696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.206703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.552 [2024-10-01 15:58:58.217934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.217955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.218261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.218276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.552 [2024-10-01 15:58:58.218284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.218498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.218508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-10-01 15:58:58.218519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.219208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.219226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.552 [2024-10-01 15:58:58.219540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.219551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.219558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.552 [2024-10-01 15:58:58.219567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-10-01 15:58:58.219573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-10-01 15:58:58.219580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-10-01 15:58:58.219622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.219630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.552 [2024-10-01 15:58:58.228015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.228044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-10-01 15:58:58.228275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.228295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-10-01 15:58:58.228303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.228522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-10-01 15:58:58.228533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.552 [2024-10-01 15:58:58.228540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.552 [2024-10-01 15:58:58.228548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.228674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.228685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.228691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.228697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.228851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.228861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.228873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.228880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.228905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.553 [2024-10-01 15:58:58.238452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.238475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.238651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.238664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.553 [2024-10-01 15:58:58.238671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.238810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.238820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.553 [2024-10-01 15:58:58.238827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.239325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.239341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.239715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.239726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.239732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.553 [2024-10-01 15:58:58.239742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.239748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.239754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.239914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.239925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.249345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.249365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.249553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.249565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.553 [2024-10-01 15:58:58.249572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.249735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.249744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.553 [2024-10-01 15:58:58.249751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.249763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.249771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.249781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.249787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.249794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.249802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.249811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.249817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.249830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.249837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.553 [2024-10-01 15:58:58.261531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.261553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.261844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.261861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.553 [2024-10-01 15:58:58.261874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.262067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.262078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.553 [2024-10-01 15:58:58.262084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.262289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.262303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.262446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.262457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.262464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.553 [2024-10-01 15:58:58.262474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.262480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.262486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.262517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.262525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.273697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.273719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.274064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.274081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.553 [2024-10-01 15:58:58.274089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.274282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.274292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.553 [2024-10-01 15:58:58.274299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.274451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.274464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.553 [2024-10-01 15:58:58.274603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.274613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.274619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.274628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.553 [2024-10-01 15:58:58.274635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.553 [2024-10-01 15:58:58.274641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.553 [2024-10-01 15:58:58.274670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.553 [2024-10-01 15:58:58.274679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.553 [2024-10-01 15:58:58.284490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.284512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.553 [2024-10-01 15:58:58.284726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.284738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.553 [2024-10-01 15:58:58.284745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.553 [2024-10-01 15:58:58.285002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.553 [2024-10-01 15:58:58.285013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.553 [2024-10-01 15:58:58.285020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.285032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.285041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.285051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.285057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.285063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.554 [2024-10-01 15:58:58.285072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.285077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.285084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.285097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.285104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.296014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.296036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.296197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.296213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.554 [2024-10-01 15:58:58.296221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.296360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.296370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.554 [2024-10-01 15:58:58.296377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.296388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.296397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.296407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.296413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.296419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.296427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.296433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.296439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.296452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.296459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.554 [2024-10-01 15:58:58.308047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.308069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.308262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.308275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.554 [2024-10-01 15:58:58.308282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.308445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.308454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.554 [2024-10-01 15:58:58.308461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.308472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.308481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.308491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.308498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.308504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.554 [2024-10-01 15:58:58.308513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.308518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.308528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.308541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.308548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.319010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.319031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.319396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.319412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.554 [2024-10-01 15:58:58.319419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.319588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.319599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.554 [2024-10-01 15:58:58.319606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.319749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.319762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.319787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.319795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.319802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.319811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.319816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.319822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.319836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.319843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.554 [2024-10-01 15:58:58.330519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.330541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.331135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.331153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.554 [2024-10-01 15:58:58.331161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.331386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.331396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.554 [2024-10-01 15:58:58.331404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.331560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.331580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.554 [2024-10-01 15:58:58.331624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.331631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.331638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.554 [2024-10-01 15:58:58.331647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.554 [2024-10-01 15:58:58.331653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.554 [2024-10-01 15:58:58.331659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.554 [2024-10-01 15:58:58.331672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.331679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.554 [2024-10-01 15:58:58.341695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.341716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.554 [2024-10-01 15:58:58.342054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.342071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.554 [2024-10-01 15:58:58.342079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.342221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.554 [2024-10-01 15:58:58.342231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.554 [2024-10-01 15:58:58.342238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.554 [2024-10-01 15:58:58.342385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.342397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.342535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.342545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.342552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.342560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.342566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.342572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.342602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.342610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.555 [2024-10-01 15:58:58.352659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.352680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.352937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.352957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-10-01 15:58:58.352968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.353177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.353188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.555 [2024-10-01 15:58:58.353195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.353326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.353338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.353364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.353372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.353378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.555 [2024-10-01 15:58:58.353387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.353393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.353399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.353413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.353420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.363037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.363058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.363338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.363354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.555 [2024-10-01 15:58:58.363361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.363451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.363461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-10-01 15:58:58.363468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.363672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.363685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.363715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.363722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.363729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.363739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.363744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.363751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.363891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.363901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.555 [2024-10-01 15:58:58.374315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.374335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.374585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.374597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-10-01 15:58:58.374604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.374799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.374810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.555 [2024-10-01 15:58:58.374817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.375267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.375281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.375479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.375489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.375496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.555 [2024-10-01 15:58:58.375505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.375511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.375517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.375662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.375671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.385806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.385827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.386175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.386191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.555 [2024-10-01 15:58:58.386199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.386394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.386405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-10-01 15:58:58.386411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.386693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.386707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.386876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.386888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.386894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.386903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.555 [2024-10-01 15:58:58.386909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.555 [2024-10-01 15:58:58.386916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.555 [2024-10-01 15:58:58.387059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.555 [2024-10-01 15:58:58.387069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.555 [2024-10-01 15:58:58.396852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.396878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.555 [2024-10-01 15:58:58.397088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.397101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.555 [2024-10-01 15:58:58.397108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.397245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.555 [2024-10-01 15:58:58.397255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.555 [2024-10-01 15:58:58.397262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.555 [2024-10-01 15:58:58.397273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.555 [2024-10-01 15:58:58.397282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.397292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.397299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.397305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.556 [2024-10-01 15:58:58.397314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.397319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.397325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.397339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.397346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.409632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.409653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.409805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.409817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.556 [2024-10-01 15:58:58.409825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.410048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.410059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.556 [2024-10-01 15:58:58.410065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.410077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.410086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.410096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.410102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.410109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.410118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.410123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.410129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.410142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.410150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.556 [2024-10-01 15:58:58.420383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.420404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.420566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.420579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.556 [2024-10-01 15:58:58.420586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.420782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.420792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.556 [2024-10-01 15:58:58.420798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.420810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.420819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.420829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.420835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.420841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.556 [2024-10-01 15:58:58.420849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.420855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.420861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.420882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.420892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.432169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.432191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.432469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.432484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.556 [2024-10-01 15:58:58.432491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.432732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.432743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.556 [2024-10-01 15:58:58.432750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.433667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.433683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.434175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.434187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.434193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.434203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.434209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.434215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.434378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.434388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.556 [2024-10-01 15:58:58.443814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.443834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.444818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.444835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.556 [2024-10-01 15:58:58.444843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.445003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.445013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.556 [2024-10-01 15:58:58.445020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.445488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.445503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.445694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.445708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.445715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.556 [2024-10-01 15:58:58.445725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.445731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.445737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.556 [2024-10-01 15:58:58.445770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.445778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.556 [2024-10-01 15:58:58.455226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.455248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.556 [2024-10-01 15:58:58.455654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.455670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.556 [2024-10-01 15:58:58.455678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.455824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.556 [2024-10-01 15:58:58.455834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.556 [2024-10-01 15:58:58.455840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.556 [2024-10-01 15:58:58.456099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.456113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.556 [2024-10-01 15:58:58.456273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.556 [2024-10-01 15:58:58.456283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.556 [2024-10-01 15:58:58.456290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.456299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.456306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.456312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.456342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.456349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.557 [2024-10-01 15:58:58.467087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.467108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.467432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.467447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.557 [2024-10-01 15:58:58.467455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.467579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.467592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.557 [2024-10-01 15:58:58.467599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.468273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.468290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.468603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.468614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.468620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.557 [2024-10-01 15:58:58.468630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.468637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.468643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.468686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.468694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.477168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.477198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.477479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.477493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.557 [2024-10-01 15:58:58.477500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.477847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.477866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.557 [2024-10-01 15:58:58.477874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.477883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.477912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.477920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.477926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.477932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.477945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.477952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.477958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.477963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.477975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.557 [2024-10-01 15:58:58.487417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.487437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.487599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.487611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.557 [2024-10-01 15:58:58.487619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.487832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.487842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.557 [2024-10-01 15:58:58.487848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.487860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.487874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.487884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.487890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.487897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.557 [2024-10-01 15:58:58.487905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.487910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.487917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.487930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.487937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.498848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.498873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.499040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.499052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.557 [2024-10-01 15:58:58.499060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.499228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.499239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.557 [2024-10-01 15:58:58.499246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.499257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.499266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.499276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.499282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.499292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.499301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.499307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.499313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.557 [2024-10-01 15:58:58.499326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.557 [2024-10-01 15:58:58.499332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.557 [2024-10-01 15:58:58.511541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.511562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.557 [2024-10-01 15:58:58.511972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.511989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.557 [2024-10-01 15:58:58.511997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.512211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.557 [2024-10-01 15:58:58.512221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.557 [2024-10-01 15:58:58.512228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.557 [2024-10-01 15:58:58.512482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.512495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.557 [2024-10-01 15:58:58.512542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.512551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.512557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.557 [2024-10-01 15:58:58.512567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.557 [2024-10-01 15:58:58.512572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.557 [2024-10-01 15:58:58.512579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.512593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.512599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.522806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.522827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.523127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.523144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.523151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.523366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.523376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.523386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.523530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.523542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.523680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.523690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.523696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.523706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.523713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.523719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.523749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.523756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.558 [2024-10-01 15:58:58.533712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.533736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.533929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.533944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.533952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.534056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.534068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.534075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.534206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.534219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.534357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.534367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.534374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.558 [2024-10-01 15:58:58.534383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.534390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.534396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.534425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.534433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.544874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.544901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.545298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.545315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.545323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.545551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.545562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.545569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.545600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.545610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.545620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.545626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.545633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.545642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.545648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.545655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.545668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.545675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.558 [2024-10-01 15:58:58.555666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.555687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.555872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.555886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.555894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.556081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.556091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.556098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.556492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.556505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.556734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.556744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.556751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.558 [2024-10-01 15:58:58.556764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.556770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.556776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.556933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.556944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.567946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.567969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.568376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.568394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.568403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.568614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.568625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.568633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.568665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.568675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.568694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.568702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.568709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.568718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.568725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.568730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.568744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.568752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.558 [2024-10-01 15:58:58.578062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.578081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.578242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.578255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.578262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.578395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.578405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.578412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.579132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.579147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.579614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.579625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.579632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.558 [2024-10-01 15:58:58.579641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.579647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.579654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.579822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.579831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.590146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.590167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.590517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.590534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.558 [2024-10-01 15:58:58.590541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.590735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.590746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.558 [2024-10-01 15:58:58.590753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.558 [2024-10-01 15:58:58.591050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.591065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.558 [2024-10-01 15:58:58.591215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.591225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.591232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.591242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.558 [2024-10-01 15:58:58.591248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.558 [2024-10-01 15:58:58.591254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.558 [2024-10-01 15:58:58.591285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.558 [2024-10-01 15:58:58.591292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.558 [2024-10-01 15:58:58.601649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.601670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.558 [2024-10-01 15:58:58.602090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.558 [2024-10-01 15:58:58.602107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.602115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.602260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.602270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.602277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.602530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.602543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.602691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.602701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.602708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.559 [2024-10-01 15:58:58.602717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.602724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.602730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.602760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.602768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.613079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.613101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.613479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.613495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.613503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.613640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.613650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.613656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.613840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.613855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.614001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.614012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.614018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.614028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.614034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.614044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.614074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.614082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.559 [2024-10-01 15:58:58.624615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.624637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.624971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.624989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.624997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.625213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.625224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.625231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.625485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.625499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.625535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.625543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.625549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.559 [2024-10-01 15:58:58.625559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.625565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.625571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.625700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.625709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.636130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.636151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.636508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.636524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.636531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.636744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.636755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.636762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.636951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.636970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.637112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.637123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.637129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.637139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.637145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.637151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.637293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.637302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.559 [2024-10-01 15:58:58.647675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.647696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.648081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.648098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.648105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.648271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.648281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.648288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.648570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.648583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.648734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.648744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.648750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.559 [2024-10-01 15:58:58.648760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.648766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.648772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.648803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.648810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.659194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.659215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.659563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.659579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.659591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.659809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.659819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.659826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.660083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.660098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.660134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.660142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.660148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.660157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.660163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.660169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.660298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.660307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.559 [2024-10-01 15:58:58.670333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.670353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.670590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.670603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.670610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.670803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.670813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.670819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.670831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.670840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.670850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.670856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.670867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.559 [2024-10-01 15:58:58.670876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.670882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.670890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.670904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.670911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.683066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.683087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.683323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.683335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.683343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.683549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.683559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.683566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.683578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.683588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.683597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.683604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.683610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.683618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.683624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.683631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.559 [2024-10-01 15:58:58.683644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.559 [2024-10-01 15:58:58.683651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.559 [2024-10-01 15:58:58.693764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.693785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.559 [2024-10-01 15:58:58.693999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.694011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.559 [2024-10-01 15:58:58.694019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.694213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.559 [2024-10-01 15:58:58.694224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.559 [2024-10-01 15:58:58.694230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.559 [2024-10-01 15:58:58.694243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.694252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.559 [2024-10-01 15:58:58.694268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.694274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.559 [2024-10-01 15:58:58.694281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.559 [2024-10-01 15:58:58.694289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.559 [2024-10-01 15:58:58.694295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.694301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.694314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.694321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.706386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.706407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.706750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.706766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.706774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.706991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.707002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.707010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.707363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.707377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.707635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.707646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.707653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.707663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.707669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.707675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.707715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.707723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.560 [2024-10-01 15:58:58.718101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.718122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.718458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.718475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.718482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.718707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.718718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.718725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.718879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.718893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.719062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.719073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.719080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.560 [2024-10-01 15:58:58.719089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.719095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.719102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.719175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.719185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.729114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.729135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.729339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.729352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.729359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.729493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.729503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.729510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.729639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.729651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.729800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.729811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.729817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.729827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.729833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.729839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.729875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.729886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.560 [2024-10-01 15:58:58.739397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.739418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.739663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.739677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.739684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.739901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.739912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.739919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.739931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.739940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.739950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.739956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.739963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.560 [2024-10-01 15:58:58.739971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.739977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.739983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.739997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.740003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.752101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.752122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.752281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.752293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.752300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.752516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.752526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.752532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.752544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.752553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.752563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.752572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.752578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.752587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.752592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.752598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.752612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.752619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.560 [2024-10-01 15:58:58.764057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.764079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.764475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.764492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.764500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.764667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.764678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.764685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.764833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.764846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.764875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.764883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.764890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.560 [2024-10-01 15:58:58.764899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.764905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.764911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.764926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.764932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.775522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.775544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.775840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.775855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.775868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.775955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.775968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.775975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.776119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.776131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.776157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.776164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.776170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.776179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.776185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.776191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.776214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.776221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.560 [2024-10-01 15:58:58.786473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.786493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.786707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.786720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.786727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.786987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.786998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.787005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.787017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.787026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.787041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.787048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.787054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.560 [2024-10-01 15:58:58.787063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.787069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.787075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.787089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.787095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.560 [2024-10-01 15:58:58.796898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.796919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.560 [2024-10-01 15:58:58.797128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.797140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.560 [2024-10-01 15:58:58.797148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.797337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.560 [2024-10-01 15:58:58.797347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.560 [2024-10-01 15:58:58.797354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.560 [2024-10-01 15:58:58.797365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.797374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.560 [2024-10-01 15:58:58.797385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.560 [2024-10-01 15:58:58.797391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.560 [2024-10-01 15:58:58.797397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.560 [2024-10-01 15:58:58.797405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.797411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.797417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.561 [2024-10-01 15:58:58.797430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.561 [2024-10-01 15:58:58.797436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.561 [2024-10-01 15:58:58.809826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.809849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.810538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.810557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.561 [2024-10-01 15:58:58.810565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.810713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.810722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.561 [2024-10-01 15:58:58.810729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.811037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.811052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.811203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.811213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.811223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.561 [2024-10-01 15:58:58.811233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.811239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.811245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.561 [2024-10-01 15:58:58.811275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.561 [2024-10-01 15:58:58.811283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.561 [2024-10-01 15:58:58.821236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.821260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.821658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.821675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.561 [2024-10-01 15:58:58.821683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.821878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.821890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.561 [2024-10-01 15:58:58.821897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.822046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.822059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.822086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.822093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.822100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.561 [2024-10-01 15:58:58.822109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.822116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.822122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.561 [2024-10-01 15:58:58.822146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.561 [2024-10-01 15:58:58.822153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.561 [2024-10-01 15:58:58.832051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.832074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.561 [2024-10-01 15:58:58.832194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.832207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.561 [2024-10-01 15:58:58.832214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.832436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.561 [2024-10-01 15:58:58.832447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.561 [2024-10-01 15:58:58.832458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.561 [2024-10-01 15:58:58.832471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.832481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.561 [2024-10-01 15:58:58.832491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.561 [2024-10-01 15:58:58.832497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.561 [2024-10-01 15:58:58.832503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.561 [2024-10-01 15:58:58.832512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.832517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.832523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.832537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.832544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.843051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.843074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.843250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.843265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.843273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.843466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.843476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.843483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.843644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.843658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.843809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.843821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.843827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.843837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.843844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.843850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.843887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.843895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.853134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.854122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.854293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.854308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.854315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.854886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.854903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.854911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.854920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.855221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.855234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.855240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.855247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.855289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.855297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.855303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.855309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.855322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.864723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.864899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.865084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.865099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.865107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.865425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.865440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.865448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.865457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.865601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.865611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.865618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.865624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.865769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.865779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.865785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.865791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.865820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.876132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.876153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.876604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.876620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.876628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.876813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.876824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.876831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.876984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.876997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.877024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.877031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.877039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.877047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.877053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.877059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.877240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.877250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.887562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.887584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.888097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.888114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.888122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.888222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.888232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.888239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.888405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.888417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.888443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.888450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.888457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.888466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.888472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.888478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.888491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.888498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.899201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.899223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.899533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.899550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.899557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.899784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.899794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.899801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.899829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.899840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.561 [2024-10-01 15:58:58.899849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.899856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.899867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.899876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.561 [2024-10-01 15:58:58.899882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.561 [2024-10-01 15:58:58.899888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.561 [2024-10-01 15:58:58.899902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.899909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.561 [2024-10-01 15:58:58.909738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.909759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.561 [2024-10-01 15:58:58.909991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.910004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.561 [2024-10-01 15:58:58.910011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.561 [2024-10-01 15:58:58.910160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.561 [2024-10-01 15:58:58.910170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.561 [2024-10-01 15:58:58.910177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.910308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.910320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.910346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.910354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.910360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.910369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.910375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.910381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.910394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.910400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.921024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.921046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.921375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.921391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.921399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.921472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.921481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.921488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.921632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.921645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.921791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.921800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.921807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.921816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.921826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.921832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.921859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.921873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.931785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.931805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.931988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.932001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.932008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.932154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.932164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.932170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.932508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.932522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.932681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.932691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.932698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.932707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.932713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.932719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.932899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.932909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.942501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.942521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.942688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.942700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.942707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.942874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.942885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.942892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.942904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.942916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.942926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.942932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.942938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.942947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.942953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.942959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.942973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.942979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.954764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.954785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.955164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.955181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.955188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.955360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.955371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.955378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.955561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.955576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.955716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.955727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.955734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.955743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.955749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.955755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.955905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.955916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.966317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.966339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.966679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.966696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.966707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.966871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.966881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.966887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.967142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.967156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.967192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.967200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.967207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.967216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.967222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.967228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.967356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.967365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.977779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.977801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.562 [2024-10-01 15:58:58.978122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.978139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.562 [2024-10-01 15:58:58.978147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.978285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.562 [2024-10-01 15:58:58.978294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.562 [2024-10-01 15:58:58.978301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.562 [2024-10-01 15:58:58.978482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.978497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.562 [2024-10-01 15:58:58.978637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.978648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.978654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.978664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.562 [2024-10-01 15:58:58.978670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.562 [2024-10-01 15:58:58.978683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.562 [2024-10-01 15:58:58.978826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 [2024-10-01 15:58:58.978835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.562 11339.86 IOPS, 44.30 MiB/s [2024-10-01 15:58:58.989362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.562 [2024-10-01 15:58:58.989385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.562 [2024-10-01 15:58:58.989678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.562 [2024-10-01 15:58:58.989694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.562 [2024-10-01 15:58:58.989702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.562 [2024-10-01 15:58:58.989796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.562 [2024-10-01 15:58:58.989807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.562 [2024-10-01 15:58:58.989814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.562 [2024-10-01 15:58:58.989963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.562 [2024-10-01 15:58:58.989976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.562 [2024-10-01 15:58:58.989998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.562 [2024-10-01 15:58:58.990005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.562 [2024-10-01 15:58:58.990012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.562 [2024-10-01 15:58:58.990021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.562 [2024-10-01 15:58:58.990027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.562 [2024-10-01 15:58:58.990033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.562 [2024-10-01 15:58:58.990161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.562 [2024-10-01 15:58:58.990170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.562 [2024-10-01 15:58:59.000564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.562 [2024-10-01 15:58:59.000585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.562 [2024-10-01 15:58:59.000748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.562 [2024-10-01 15:58:59.000761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.562 [2024-10-01 15:58:59.000768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.000936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.000947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.000954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.000966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.000975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.000989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.000995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.001001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.001010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.001016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.001022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.001036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.001042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.563 [2024-10-01 15:58:59.012974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.012996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.013113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.013125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.013132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.013231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.013240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.013246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.013258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.013267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.013285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.013292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.013298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.563 [2024-10-01 15:58:59.013307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.013313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.013319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.013332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.013339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.024129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.024150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.024950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.024969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.024980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.025132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.025142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.025149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.025800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.025817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.026186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.026198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.026204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.026214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.026220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.026227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.026284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.026293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.563 [2024-10-01 15:58:59.034515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.034545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.034775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.034797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.034805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.034960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.034971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.034978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.034987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.035141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.035152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.035158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.035164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.563 [2024-10-01 15:58:59.035278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.035288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.035294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.035304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.035444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.045624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.045647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.045803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.045816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.045823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.046022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.046033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.046040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.046312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.046327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.046506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.046517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.046523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.046533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.046539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.046546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.046689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.046699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.563 [2024-10-01 15:58:59.057915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.057936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.058459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.058476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.058484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.058580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.058590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.058597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.058860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.058882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.059031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.059044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.059052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.563 [2024-10-01 15:58:59.059062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.059068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.059074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.059103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.059111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.068807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.068829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.069116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.069133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.069141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.069279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.069289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.069295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.069440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.069452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.069479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.069487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.069493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.069502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.069507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.069514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.069642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.069651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.563 [2024-10-01 15:58:59.080039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.080061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.080416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.080432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.080440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.080641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.080652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.080659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.080802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.080814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.080958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.080969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.080975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.563 [2024-10-01 15:58:59.080985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.080991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.080997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.081026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.081034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.090800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.090821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.090935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.090948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.090956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.091101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.091111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.091117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.091129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.091138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.091333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.091344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.091350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.091359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.091365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.091371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.091502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.563 [2024-10-01 15:58:59.091511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.563 [2024-10-01 15:58:59.101663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.101685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.563 [2024-10-01 15:58:59.101981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.101998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.563 [2024-10-01 15:58:59.102006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.102092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.563 [2024-10-01 15:58:59.102109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.563 [2024-10-01 15:58:59.102116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.563 [2024-10-01 15:58:59.102260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.102272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.563 [2024-10-01 15:58:59.102308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.102316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.102323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.563 [2024-10-01 15:58:59.102332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.563 [2024-10-01 15:58:59.102338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.563 [2024-10-01 15:58:59.102344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.563 [2024-10-01 15:58:59.102357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.102364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.112826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.112848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.113155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.113163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.113307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.113317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.113323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.113468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.113481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.113618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.113628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.113639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.113648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.113654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.113660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.113690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.113697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.564 [2024-10-01 15:58:59.123914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.123935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.124107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.124119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.124127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.124213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.124223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.124230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.124241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.124250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.124260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.124266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.124273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.564 [2024-10-01 15:58:59.124282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.124289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.124295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.124308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.124316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.134739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.134761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.134979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.134993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.135001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.135093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.135103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.135113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.135244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.135256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.135601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.135613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.135620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.135629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.135635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.135642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.135797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.135807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.564 [2024-10-01 15:58:59.146008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.146030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.146219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.146232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.146239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.146337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.146347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.146354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.146514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.146527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.146665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.146675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.146681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.564 [2024-10-01 15:58:59.146691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.146697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.146703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.146732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.146740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.156234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.156263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.156423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.156435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.156443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.156533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.156543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.156549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.156560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.156570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.156580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.156586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.156592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.156600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.156606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.156611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.156625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.156632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.564 [2024-10-01 15:58:59.169262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.169284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.169689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.169707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.169714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.169888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.169898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.169905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.170364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.170379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.170539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.170549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.170556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.564 [2024-10-01 15:58:59.170569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.170575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.170581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.170723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.170733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.180238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.180259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.180422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.180435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.180442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.180579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.180589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.180596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.180607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.180616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.180626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.180633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.180640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.180649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.180655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.180661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.180675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.180685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.564 [2024-10-01 15:58:59.191704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.191725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.192211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.192230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.192238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.192374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.192384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.192391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.192656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.192671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.192818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.192829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.192835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.564 [2024-10-01 15:58:59.192845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.192851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.192857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.192894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.192902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.203749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.203770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.204127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.204144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.564 [2024-10-01 15:58:59.204152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.204252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.204261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.564 [2024-10-01 15:58:59.204268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.564 [2024-10-01 15:58:59.204424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.204436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.564 [2024-10-01 15:58:59.204574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.204585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.204592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.204602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.564 [2024-10-01 15:58:59.204607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.564 [2024-10-01 15:58:59.204614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.564 [2024-10-01 15:58:59.204643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.564 [2024-10-01 15:58:59.204650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.564 [2024-10-01 15:58:59.214945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.214966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.564 [2024-10-01 15:58:59.215086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.564 [2024-10-01 15:58:59.215099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.215107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.215255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.215264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.215271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.215401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.215413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.215551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.215560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.215567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.565 [2024-10-01 15:58:59.215576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.215582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.215588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.215618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.215626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.225632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.225653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.225832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.225845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.225853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.226007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.226017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.226024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.226155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.226167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.226305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.226315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.226322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.226331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.226340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.226347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.226376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.226384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.565 [2024-10-01 15:58:59.237511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.237532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.237697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.237709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.237717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.237855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.237870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.237877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.237889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.237898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.237908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.237915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.237921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.565 [2024-10-01 15:58:59.237929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.237935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.237940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.237953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.237960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.249653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.249675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.249889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.249902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.249910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.250133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.250144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.250150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.250170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.250184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.250193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.250199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.250205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.250214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.250220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.250226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.250239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.250246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.565 [2024-10-01 15:58:59.261492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.261514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.261723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.261736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.261744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.261976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.261987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.261994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.262005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.262015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.262025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.262031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.262037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.565 [2024-10-01 15:58:59.262046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.262052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.262058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.262485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.262496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.273064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.273085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.273297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.273315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.273323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.273515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.273526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.273532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.273984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.273999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.274166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.274176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.274183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.274192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.274199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.274205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.274381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.274391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.565 [2024-10-01 15:58:59.283986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.284006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.284267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.284280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.284287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.284480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.284490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.284497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.284509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.284518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.284528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.284534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.284540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.565 [2024-10-01 15:58:59.284548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.284554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.284564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.565 [2024-10-01 15:58:59.284577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.284584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.565 [2024-10-01 15:58:59.296052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.296074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.565 [2024-10-01 15:58:59.296458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.296475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.565 [2024-10-01 15:58:59.296483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.296627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.565 [2024-10-01 15:58:59.296637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.565 [2024-10-01 15:58:59.296643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.565 [2024-10-01 15:58:59.296741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.296752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.565 [2024-10-01 15:58:59.297571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.565 [2024-10-01 15:58:59.297586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.565 [2024-10-01 15:58:59.297593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.297602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.297609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.297615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.298050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.298063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.567 [2024-10-01 15:58:59.306343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.306365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.306605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.306618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.306626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.306708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.306717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.306724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.306735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.306745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.306759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.306765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.306771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.567 [2024-10-01 15:58:59.306780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.306786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.306792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.306805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.306812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.317371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.317393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.317785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.317802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.317810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.317894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.317905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.317912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.318056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.318068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.318094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.318101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.318107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.318116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.318122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.318129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.318142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.318149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.567 [2024-10-01 15:58:59.328451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.328472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.328600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.328613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.328623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.328766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.328776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.328783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.328919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.328932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.328958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.328966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.328973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.567 [2024-10-01 15:58:59.328981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.328987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.328994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.329007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.329014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.339120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.339142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.339511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.339527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.339535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.339686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.339696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.339703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.339849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.339869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.339897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.339904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.339911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.339920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.339926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.339932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.339950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.339956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.567 [2024-10-01 15:58:59.349583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.349604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.349784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.349797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.349805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.349969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.349980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.349987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.350481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.350495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.350981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.350993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.351000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.567 [2024-10-01 15:58:59.351010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.351017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.351023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.351190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.351199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.360611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.360632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.360867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.360880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.360887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.361037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.361046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.361053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.362019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.362035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.362260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.362274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.362281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.362290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.362296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.362302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.362454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.362464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.567 [2024-10-01 15:58:59.372653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.372675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.373027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.373044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.373051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.373266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.373276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.373283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.373576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.373590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.373745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.373755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.373761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.567 [2024-10-01 15:58:59.373771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.373777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.373784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.373814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.373821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.384162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.384184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.384523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.384541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.384548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.384747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.384758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.384764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.385023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.385037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.385185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.385195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.385202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.385211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.385217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.385223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.385252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.567 [2024-10-01 15:58:59.385260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.567 [2024-10-01 15:58:59.395416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.395437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.567 [2024-10-01 15:58:59.395582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.395595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.567 [2024-10-01 15:58:59.395603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.395703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.567 [2024-10-01 15:58:59.395712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.567 [2024-10-01 15:58:59.395719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.567 [2024-10-01 15:58:59.395730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.395740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.567 [2024-10-01 15:58:59.395749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.395756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.395762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.567 [2024-10-01 15:58:59.395771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.567 [2024-10-01 15:58:59.395776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.567 [2024-10-01 15:58:59.395782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.567 [2024-10-01 15:58:59.395796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.395805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.406527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.406548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.406661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.406674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.406682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.406900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.406910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.406917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.406929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.406938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.406948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.406954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.406960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.406969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.406975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.406981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.406994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.407001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.568 [2024-10-01 15:58:59.416962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.416983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.417142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.417155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.417162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.417321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.417331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.417338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.417349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.417358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.417368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.417375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.417384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.568 [2024-10-01 15:58:59.417393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.417399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.417405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.417418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.417425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.428207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.428228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.428885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.428903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.428911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.429056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.429066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.429073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.429269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.429284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.429373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.429381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.429388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.429397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.429403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.429409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.430082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.430094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.568 [2024-10-01 15:58:59.438623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.438645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.438825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.438837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.438845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.439075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.439087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.439097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.439282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.439296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.439323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.439330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.439337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.568 [2024-10-01 15:58:59.439346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.439352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.439358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.439372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.439378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.449916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.449937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.450222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.450238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.450246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.450439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.450449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.450456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.450486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.450496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.450515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.450522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.450529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.450537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.450543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.450549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.450562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.450569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.568 [2024-10-01 15:58:59.460783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.460808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.460990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.461003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.461011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.461202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.461211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.461218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.461230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.461239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.461249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.461255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.461262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.568 [2024-10-01 15:58:59.461271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.461276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.461282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.461296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.461303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.470872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.470901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.471109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.471121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.471128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.471272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.471282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.471289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.471297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.471308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.471316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.471322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.471328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.471344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.471351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.471356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.471362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.471375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.568 [2024-10-01 15:58:59.482170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.482191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.482375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.482389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.482396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.482564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.482574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.482580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.482922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.482938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.483198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.483208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.483215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.568 [2024-10-01 15:58:59.483224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.483230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.483236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.483277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.483285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.568 [2024-10-01 15:58:59.493633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.493655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.568 [2024-10-01 15:58:59.494031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.494048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.568 [2024-10-01 15:58:59.494055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.494180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.568 [2024-10-01 15:58:59.494189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.568 [2024-10-01 15:58:59.494199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.568 [2024-10-01 15:58:59.494228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.494239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.568 [2024-10-01 15:58:59.494248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.494254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.494261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.568 [2024-10-01 15:58:59.494269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.568 [2024-10-01 15:58:59.494275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.568 [2024-10-01 15:58:59.494281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.494447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.494457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.569 [2024-10-01 15:58:59.503713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.503742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.503886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.503899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.503906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.504124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.504134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.504140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.504149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.505116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.505131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.505137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.505143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.569 [2024-10-01 15:58:59.505619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.505630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.505636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.505642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.505817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.515280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.515301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.515685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.515701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.515709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.515855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.515870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.515877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.516140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.516154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.516191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.516198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.516204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.516213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.516219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.516225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.516354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.516363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.569 [2024-10-01 15:58:59.526753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.526774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.527135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.527152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.527159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.527282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.527292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.527298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.527443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.527455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.527592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.527602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.527608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.569 [2024-10-01 15:58:59.527618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.527627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.527634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.527663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.527671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.538266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.538287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.538670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.538686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.538694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.538908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.538920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.538927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.539190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.539204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.539241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.539248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.539255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.539264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.539270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.539276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.539404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.539413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.569 [2024-10-01 15:58:59.549803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.549824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.550157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.550174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.550181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.550396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.550406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.550413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.550565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.550579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.550716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.550726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.550733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.569 [2024-10-01 15:58:59.550742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.550748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.550754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.550901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.550911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.561335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.561356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.561764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.561780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.561788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.561956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.561966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.561973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.562240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.562254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.562291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.562298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.562305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.562314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.562320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.562326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.562454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.562463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.569 [2024-10-01 15:58:59.572859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.572885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.573261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.573281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.573288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.573426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.573435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.573442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.573615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.573629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.573769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.573779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.573786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.569 [2024-10-01 15:58:59.573795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.573801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.573807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.573956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.573966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.584203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.584223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.584459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.584471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.584478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.584571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.584580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.584587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.584598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.584608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.584618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.584625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.584631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.584640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.584645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.584655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.584669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.584675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.569 [2024-10-01 15:58:59.595410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.595431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.595642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.595654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.595662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.595750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.595759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.595766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.595777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.595786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.595796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.595802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.595809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.569 [2024-10-01 15:58:59.595817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.595823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.569 [2024-10-01 15:58:59.595829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.569 [2024-10-01 15:58:59.595842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.595849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.569 [2024-10-01 15:58:59.605697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.605718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.569 [2024-10-01 15:58:59.605945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.605959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.569 [2024-10-01 15:58:59.605966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.606055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.569 [2024-10-01 15:58:59.606064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.569 [2024-10-01 15:58:59.606071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.569 [2024-10-01 15:58:59.606202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.606217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.569 [2024-10-01 15:58:59.606355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.569 [2024-10-01 15:58:59.606364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.606371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.606380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.606386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.606391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.606421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.606429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.570 [2024-10-01 15:58:59.618377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.618397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.618682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.618698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.618706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.618874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.618885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.618892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.619036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.619049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.619074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.619082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.619088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.570 [2024-10-01 15:58:59.619098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.619104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.619110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.619246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.619255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.629144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.629165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.629455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.629471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.629482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.629623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.629633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.629639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.629782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.629794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.629939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.629949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.629956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.629965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.629971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.629977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.630006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.630014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.570 [2024-10-01 15:58:59.640978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.640999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.641244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.641257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.641264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.641408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.641418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.641425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.641436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.641445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.641455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.641461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.641468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.570 [2024-10-01 15:58:59.641476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.641482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.641488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.641505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.641511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.652557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.652580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.652845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.652859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.652874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.653091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.653102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.653108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.653120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.653130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.653147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.653154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.653161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.653169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.653175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.653181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.653195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.653202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.570 [2024-10-01 15:58:59.664272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.664292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.664529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.664541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.664549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.664646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.664656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.664663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.664676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.664685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.664699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.664705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.664712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.570 [2024-10-01 15:58:59.664720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.664727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.664733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.664747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.664753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.676358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.676380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.676637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.676655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.676662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.676824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.676837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.676845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.677405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.677421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.677723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.677734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.677740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.677750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.677756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.677762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.570 [2024-10-01 15:58:59.677923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.570 [2024-10-01 15:58:59.677933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.570 [2024-10-01 15:58:59.686483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.686504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.570 [2024-10-01 15:58:59.686662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.686675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.570 [2024-10-01 15:58:59.686682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.686781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.570 [2024-10-01 15:58:59.686791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.570 [2024-10-01 15:58:59.686797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.570 [2024-10-01 15:58:59.686809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.686818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.570 [2024-10-01 15:58:59.686828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.570 [2024-10-01 15:58:59.686833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.570 [2024-10-01 15:58:59.686840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.570 [2024-10-01 15:58:59.686848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.570 [2024-10-01 15:58:59.686854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.570 [2024-10-01 15:58:59.686860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.570 [2024-10-01 15:58:59.686880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.686887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.696967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.696988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.697200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.570 [2024-10-01 15:58:59.697213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.570 [2024-10-01 15:58:59.697221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.570 [2024-10-01 15:58:59.697416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.570 [2024-10-01 15:58:59.697426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.570 [2024-10-01 15:58:59.697433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.570 [2024-10-01 15:58:59.698053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.570 [2024-10-01 15:58:59.698071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.570 [2024-10-01 15:58:59.698601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.570 [2024-10-01 15:58:59.698613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.570 [2024-10-01 15:58:59.698620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.570 [2024-10-01 15:58:59.698629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.570 [2024-10-01 15:58:59.698635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.570 [2024-10-01 15:58:59.698642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.570 [2024-10-01 15:58:59.698936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.698951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.709600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.709620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.710010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.570 [2024-10-01 15:58:59.710027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.570 [2024-10-01 15:58:59.710034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.570 [2024-10-01 15:58:59.710256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.570 [2024-10-01 15:58:59.710266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.570 [2024-10-01 15:58:59.710273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.570 [2024-10-01 15:58:59.710385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.570 [2024-10-01 15:58:59.710398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.570 [2024-10-01 15:58:59.710517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.570 [2024-10-01 15:58:59.710526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.570 [2024-10-01 15:58:59.710532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.570 [2024-10-01 15:58:59.710542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.570 [2024-10-01 15:58:59.710548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.570 [2024-10-01 15:58:59.710554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.570 [2024-10-01 15:58:59.710696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.710705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.570 [2024-10-01 15:58:59.720450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.720472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.570 [2024-10-01 15:58:59.721007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.721026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.721034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.721162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.721171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.721178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.721340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.721353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.721379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.721386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.721396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.721405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.721411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.721417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.721431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.721438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.730531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.730671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.730929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.730945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.730953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.731301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.731315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.731323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.731332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.731716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.731729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.731735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.731742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.731926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.731937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.731943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.731949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.732091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.742115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.742136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.742347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.742360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.742367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.742458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.742471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.742478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.742826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.742840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.743005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.743016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.743022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.743032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.743038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.743045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.743247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.743258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.753493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.753515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.753708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.753721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.753730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.753874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.753886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.753893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.753905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.753914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.754362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.754373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.754379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.754389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.754396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.754403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.754575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.754585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.764402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.764423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.764680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.764694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.764701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.764851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.764861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.764875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.764886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.764895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.764904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.764912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.764919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.764928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.764934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.764940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.764954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.764962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.776054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.776076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.776242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.776256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.776265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.776484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.776495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.776502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.776514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.776523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.776533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.776539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.776549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.776557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.776563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.776570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.776584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.776590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.787494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.787516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.787776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.787791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.787799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.787871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.787882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.787889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.787900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.787910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.787920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.787927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.787933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.787942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.787949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.787956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.787969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.787975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.798922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.798944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.799235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.799251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.799259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.799454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.799466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.799476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.799943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.799959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.800120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.800131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.800139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.800149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.800155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.800162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.800304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.800314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.809370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.809391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.809595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.809609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.571 [2024-10-01 15:58:59.809617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.809751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.809761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.809768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.809779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.809789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.809800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.809806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.809813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.809822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.571 [2024-10-01 15:58:59.809828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.571 [2024-10-01 15:58:59.809835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.571 [2024-10-01 15:58:59.809849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.809856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.571 [2024-10-01 15:58:59.822059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.822082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.571 [2024-10-01 15:58:59.822323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.571 [2024-10-01 15:58:59.822336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.571 [2024-10-01 15:58:59.822344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.571 [2024-10-01 15:58:59.822947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.571 [2024-10-01 15:58:59.823134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.823145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.823152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.823299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.833483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.833744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.833760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.833768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.833791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.833806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.833813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.833819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.834274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.844974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.845210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.845227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.845236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.845252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.845266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.845273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.845279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.845295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.855987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.856121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.856136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.856143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.856165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.856179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.856186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.856192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.856208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.868301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.868647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.868665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.868673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.868819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.868973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.868984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.868991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.869025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.879880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.880058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.880073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.880081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.880122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.880141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.880147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.880154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.880182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.892308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.892885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.892906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.892914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.893184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.893282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.893293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.893303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.893418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.905520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.905833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.905851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.905858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.905897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.905913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.905920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.905926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.905943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.916007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.916234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.916249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.916257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.916272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.916287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.916294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.916300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.916316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.928626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.928821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.928845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.928852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.929141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.929298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.929309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.929315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.929350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.939619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.939745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.939762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.939770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.939783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.939794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.939800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.939806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.939819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.940952] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:57.572 [2024-10-01 15:58:59.950618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.950867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.950883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.950891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.950904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.950915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.950921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.950928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.950940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.962972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.963338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.963356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.963363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.963505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.963534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.963541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.963548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.963562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.973972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.974146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.974162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.974170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.974305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.974335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.974342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.974349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.974363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:58:59.984435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.984654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.984669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.984677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.984689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.984699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.984705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.984711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.984725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 11362.88 IOPS, 44.39 MiB/s
00:24:57.572 [2024-10-01 15:58:59.995695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:58:59.995925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:58:59.995942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:58:59.995950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:58:59.995963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:58:59.995974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:58:59.995980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:58:59.995986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:58:59.996000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:59:00.009317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:59:00.009578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:59:00.009596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:59:00.009607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:59:00.009622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:59:00.009635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:59:00.009642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:59:00.009654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:59:00.009670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:59:00.020789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:59:00.021042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:59:00.021058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:59:00.021066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:59:00.021079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:59:00.021090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:59:00.021097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:59:00.021103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:59:00.021117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:59:00.034387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:59:00.034817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:59:00.034835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:59:00.034844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:59:00.035344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:59:00.035798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:59:00.035810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:59:00.035818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:59:00.035986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:59:00.046637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:59:00.047007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:59:00.047026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.572 [2024-10-01 15:59:00.047034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.572 [2024-10-01 15:59:00.047210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.572 [2024-10-01 15:59:00.047355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.572 [2024-10-01 15:59:00.047365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.572 [2024-10-01 15:59:00.047372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.572 [2024-10-01 15:59:00.047405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.572 [2024-10-01 15:59:00.056704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.572 [2024-10-01 15:59:00.056980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.572 [2024-10-01 15:59:00.057000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.057008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.057978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.058294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.058305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.058312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.058617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.070549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.070926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.070945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.070953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.071098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.071128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.071136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.071143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.071157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.081724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.081973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.081991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.081999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.082142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.082172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.082179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.082186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.082200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.093269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.093510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.093525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.093533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.093545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.093559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.093566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.093572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.093585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.105933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.106120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.106135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.106143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.106155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.106166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.106172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.106179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.106191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.117094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.117307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.117322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.117330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.117342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.117354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.117360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.117366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.117379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.128798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.573 [2024-10-01 15:59:00.129020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.573 [2024-10-01 15:59:00.129037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.573 [2024-10-01 15:59:00.129044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.573 [2024-10-01 15:59:00.129057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.573 [2024-10-01 15:59:00.129068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.573 [2024-10-01 15:59:00.129075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.573 [2024-10-01 15:59:00.129081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.573 [2024-10-01 15:59:00.129097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.573 [2024-10-01 15:59:00.141468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.141789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.141807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.141814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.141988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.142019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.142027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.142034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.142047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.152114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.152284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.152298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.152305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.152317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.152328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.152335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.152341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.152354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.164640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.164881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.164897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.164905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.164917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.164928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.164934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.164941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.164954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.176587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.177022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.177041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.177053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.177197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.177556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.177568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.177574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.177618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.187220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.187474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.187489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.187497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.187509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.187520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.187526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.187532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.187545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.198521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.198814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.198830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.198838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.198850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.198866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.198873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.198880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.198894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.210204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.210466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.210484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.210492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.210645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.210674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.210685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.210692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.210705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.221275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.221526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.221543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.221551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.221681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.221712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.221719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.221725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.221739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.232894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.233034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.233049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.233057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.233068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.233079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.233085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.233091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.233104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.244986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.245226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.245242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.245250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.245263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.245437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.245446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.245453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.245647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.255052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.573 [2024-10-01 15:59:00.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.573 [2024-10-01 15:59:00.255193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.573 [2024-10-01 15:59:00.255200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.573 [2024-10-01 15:59:00.255354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.573 [2024-10-01 15:59:00.255385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.573 [2024-10-01 15:59:00.255393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.573 [2024-10-01 15:59:00.255400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.573 [2024-10-01 15:59:00.255414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.573 [2024-10-01 15:59:00.266394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.266588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.266604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.266612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.266624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.266638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.266645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.266651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.266664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.277447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.277616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.277630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.277638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.277650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.277661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.277668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.277674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.277687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.289258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.289633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.289651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.289659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.289836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.289884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.289893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.289899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.289913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.300818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.301026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.301044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.301052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.301183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.301212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.301220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.301226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.301240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.311169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.311343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.311358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.311365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.311377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.311397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.311403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.311410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.311423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.323943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.324120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.324134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.324142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.324154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.324166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.324172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.324183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.324196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.335962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.336241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.336260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.336268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.336541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.336573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.336581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.336588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.336602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.346029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.346158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.346173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.346180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.346192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.346202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.346208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.346215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.346227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.356483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.356685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.356700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.356708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.356720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.356731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.356737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.356744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.356757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.366594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.366712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.366730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.366738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.366750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.366760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.366766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.366773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.366786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.377109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.377234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.377249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.377256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.377386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.377416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.377424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.377430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.377443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.388265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.388439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.388453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.388460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.388472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.388491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.388498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.388505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.388517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.401290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.401664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.401682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.401690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.402044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.402209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.402219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.402226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.402256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.413128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.413458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.413477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.413485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.413626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.413666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.413674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.413681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.413694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.423417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.423625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.423640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.423648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.423661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.423672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.423678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.423684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.423698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.436337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.436629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.436647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.436655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.437010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.574 [2024-10-01 15:59:00.437168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.574 [2024-10-01 15:59:00.437179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.574 [2024-10-01 15:59:00.437186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.574 [2024-10-01 15:59:00.437229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.574 [2024-10-01 15:59:00.447466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.574 [2024-10-01 15:59:00.447707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.574 [2024-10-01 15:59:00.447723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.574 [2024-10-01 15:59:00.447730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.574 [2024-10-01 15:59:00.447743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.447754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.447760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.447767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.447779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.459039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.459169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.459183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.459191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.459202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.459213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.459220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.459226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.459238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.469764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.469968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.469984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.469991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.470003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.470014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.470021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.470028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.470041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.482063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.482521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.482540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.482551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.482714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.482748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.482756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.482762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.482776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.493817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.494027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.494050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.494058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.494071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.494082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.494088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.494095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.494108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.506809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.507165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.507184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.507192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.507365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.507511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.507522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.507529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.507560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.518056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.518369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.518388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.518395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.518425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.518437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.518446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.518453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.518466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.528124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.528303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.528317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.528325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.529145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.529662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.529674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.529681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.529844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.539737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.539854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.539874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.539882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.539894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.539905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.539911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.539918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.539930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.552148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.552457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.552475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.552483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.552625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.552651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.552659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.552666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.552679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.562854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.562976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.562991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.562999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.563010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.563021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.563027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.563034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.563047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.574437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.574646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.574661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.574668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.574680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.574691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.574697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.574704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.574717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.586085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.586211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.586226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.586233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.586245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.586256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.586262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.586269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.586281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.596536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.596801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.596816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.596824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.596840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.596851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.596857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.596869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.596882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.607515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.607771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.607787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.607795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.607913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.608024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.608033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.608040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.608068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.618629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.618799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.618812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.618819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.618831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.618842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.618848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.618855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.618880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.630122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.630295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.630309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.630316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.630328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.630339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.630345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.630355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.575 [2024-10-01 15:59:00.630367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.575 [2024-10-01 15:59:00.642622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.575 [2024-10-01 15:59:00.642843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.575 [2024-10-01 15:59:00.642859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.575 [2024-10-01 15:59:00.642872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.575 [2024-10-01 15:59:00.642884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.575 [2024-10-01 15:59:00.642895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.575 [2024-10-01 15:59:00.642902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.575 [2024-10-01 15:59:00.642909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.576 [2024-10-01 15:59:00.642922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.576 [2024-10-01 15:59:00.653837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.576 [2024-10-01 15:59:00.654089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.576 [2024-10-01 15:59:00.654106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.576 [2024-10-01 15:59:00.654113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.576 [2024-10-01 15:59:00.654243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.576 [2024-10-01 15:59:00.654272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.576 [2024-10-01 15:59:00.654279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.576 [2024-10-01 15:59:00.654286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.576 [2024-10-01 15:59:00.654412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.576 [2024-10-01 15:59:00.664354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.576 [2024-10-01 15:59:00.664569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.576 [2024-10-01 15:59:00.664583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.576 [2024-10-01 15:59:00.664591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.576 [2024-10-01 15:59:00.664603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.576 [2024-10-01 15:59:00.664614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.576 [2024-10-01 15:59:00.664620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.576 [2024-10-01 15:59:00.664627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.576 [2024-10-01 15:59:00.664640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.576 [2024-10-01 15:59:00.677097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.576 [2024-10-01 15:59:00.677339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.576 [2024-10-01 15:59:00.677357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.576 [2024-10-01 15:59:00.677365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.576 [2024-10-01 15:59:00.677377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.576 [2024-10-01 15:59:00.677388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.576 [2024-10-01 15:59:00.677394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.576 [2024-10-01 15:59:00.677400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.576 [2024-10-01 15:59:00.677413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.576 [2024-10-01 15:59:00.687889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.688031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.688045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.688053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.688065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.688075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.688082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.688088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.688101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.699983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.700243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.700258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.700266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.700278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.700289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.700295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.700302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.700315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.712602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.712966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.712984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.712992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.713167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.713202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.713210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.713217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.713345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.723760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.724001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.724018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.724026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.724158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.724189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.724196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.724203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.724217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.734332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.734566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.734583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.734591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.734752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.734903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.734914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.734921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.734952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.745203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.745503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.745521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.745530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.745559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.745571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.745578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.745585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.745718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.756215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.756379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.756394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.756402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.756414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.756426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.756434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.756440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.756454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.767368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.767549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.767564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.767571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.767699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.767729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.767736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.767743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.767756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.778078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.778322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.778337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.778344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.778473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.778503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.778510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.778517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.778644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.789076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.789321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.789336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.789346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.789359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.789369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.789375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.789381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.789394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.799198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.799446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.799461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.799469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.799481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.799492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.799498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.799505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.799517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.810273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.810597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.810614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.810622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.810650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.810663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.810669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.576 [2024-10-01 15:59:00.810675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.576 [2024-10-01 15:59:00.810689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.576 [2024-10-01 15:59:00.822303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.576 [2024-10-01 15:59:00.822663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.576 [2024-10-01 15:59:00.822682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.576 [2024-10-01 15:59:00.822690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.576 [2024-10-01 15:59:00.822873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.576 [2024-10-01 15:59:00.822917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.576 [2024-10-01 15:59:00.822929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.822936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.822950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.833181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.833286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.833301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.833308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.833320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.833331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.833338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.833345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.833357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.845786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.846009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.846026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.846033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.846047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.846057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.846064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.846070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.846083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.856435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.856640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.856656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.856663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.856675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.856686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.856692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.856699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.856712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.867628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.867989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.868007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.868015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.868158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.868199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.868207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.868214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.868342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.878558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.878795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.878810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.878818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.878831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.878841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.878847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.878853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.878872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.890223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.890602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.890620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.890628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.890656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.890667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.890674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.890680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.890694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.901974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.902316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.902334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.902342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.902492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.902520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.902527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.902534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.902547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.912959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.913071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.913085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.913092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.913104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.913115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.913121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.913127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.913139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.924794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.924967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.924982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.924989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.925001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.925011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.925017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.925024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.925037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.934859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.935088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.935102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.935110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.935121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.935132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.935138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.935148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.935161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.946302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.946461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.946475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.946482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.946494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.946504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.946511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.946518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.946531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.956367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.956615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.956630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.956637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.956650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.956661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.956667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.956673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.956687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.967036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.967262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.967278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.967286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.968096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.968607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.968620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.968627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.968904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:00.979186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.979581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.979599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.979607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.979751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.979780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.979788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.979795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.979809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 11362.33 IOPS, 44.38 MiB/s
00:24:57.577 [2024-10-01 15:59:00.992547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:00.992795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:00.992812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:00.992820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:00.992832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:00.992844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:00.992850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:00.992858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:00.992879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:01.002613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.577 [2024-10-01 15:59:01.002904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.577 [2024-10-01 15:59:01.002920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.577 [2024-10-01 15:59:01.002928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.577 [2024-10-01 15:59:01.002941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.577 [2024-10-01 15:59:01.002951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.577 [2024-10-01 15:59:01.002958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.577 [2024-10-01 15:59:01.002964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.577 [2024-10-01 15:59:01.002977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.577 [2024-10-01 15:59:01.013968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.577 [2024-10-01 15:59:01.014136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.577 [2024-10-01 15:59:01.014150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.577 [2024-10-01 15:59:01.014158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.577 [2024-10-01 15:59:01.014170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.577 [2024-10-01 15:59:01.014185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.577 [2024-10-01 15:59:01.014191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.577 [2024-10-01 15:59:01.014197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.577 [2024-10-01 15:59:01.014210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.577 [2024-10-01 15:59:01.025738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.577 [2024-10-01 15:59:01.025868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.577 [2024-10-01 15:59:01.025883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.577 [2024-10-01 15:59:01.025890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.577 [2024-10-01 15:59:01.026140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.577 [2024-10-01 15:59:01.026283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.577 [2024-10-01 15:59:01.026293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.026300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.026439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.027678] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x985f50 was disconnected and freed. reset controller. 
00:24:57.578 [2024-10-01 15:59:01.027702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.027732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.033535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:57.578 [2024-10-01 15:59:01.033557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.578 [2024-10-01 15:59:01.033572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4422 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.578 [2024-10-01 15:59:01.033579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:24:57.578 [2024-10-01 15:59:01.033586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.578 [2024-10-01 15:59:01.033592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xabf070 00:24:57.578 [2024-10-01 15:59:01.037297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.037815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.037829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.037836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.578 [2024-10-01 15:59:01.037913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.037925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.038177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.038194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.038201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.038213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.038231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.038237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.038244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.038257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.038266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.578 [2024-10-01 15:59:01.038739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.038755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.038762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.039117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.039310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.039321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.039328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.039473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.578 [2024-10-01 15:59:01.048632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.048653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.048875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.048888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.048896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.048985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.048996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.049002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.049014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.049023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.049033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.049039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.049045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.578 [2024-10-01 15:59:01.049054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.049063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.049069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.049083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.049089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.061318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.061341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.061742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.061761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.061769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.061910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.061921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.061928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.062568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.062585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.062963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.062975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.062982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.062991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.062997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.063003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.063053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.063061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.578 [2024-10-01 15:59:01.072948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.072971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.073265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.073281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.073289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.073457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.073468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.073475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.073619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.073639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.073777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.073788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.073794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.578 [2024-10-01 15:59:01.073804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.073810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.073816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.073846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.073853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.083032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.083062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.083214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.083226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.083233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.083716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.083736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.083745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.083756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.084028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.084041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.084047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.084054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.084206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.084216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.084222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.084229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.084258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.578 [2024-10-01 15:59:01.094637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.094659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.095021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.095040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.095048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.095217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.095227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.095234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.095378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.095391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.095528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.095537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.095543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.578 [2024-10-01 15:59:01.095552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.095558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.095565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.095594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.095601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.105807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.105828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.105945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.105959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.578 [2024-10-01 15:59:01.105966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.106111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.106120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.106127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.578 [2024-10-01 15:59:01.106138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.106147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.578 [2024-10-01 15:59:01.106157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.106163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.106169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.106177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.578 [2024-10-01 15:59:01.106183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.578 [2024-10-01 15:59:01.106192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.578 [2024-10-01 15:59:01.106205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.578 [2024-10-01 15:59:01.106212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.578 [2024-10-01 15:59:01.118410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.118433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.578 [2024-10-01 15:59:01.118734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.578 [2024-10-01 15:59:01.118751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.578 [2024-10-01 15:59:01.118759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.118888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.118899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.118906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.119255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.119269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.119426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.119436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.119443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.579 [2024-10-01 15:59:01.119452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.119458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.119464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.119607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.119617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.131149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.131171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.131356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.131368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.131376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.131507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.131517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.131523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.131535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.131544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.131558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.131564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.131570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.131578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.131584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.131590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.131603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.131610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.579 [2024-10-01 15:59:01.141514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.141534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.141762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.141775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.141782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.141944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.141954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.141961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.142041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.142051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.144306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.144323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.144330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.579 [2024-10-01 15:59:01.144339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.144345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.144351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.144829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.144841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.153974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.153995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.155979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.156000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.156011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.156236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.156246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.156252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.157164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.157181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.157714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.157725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.157731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.157740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.157747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.157753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.157810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.157818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.579 [2024-10-01 15:59:01.167179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.167200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.167670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.167687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.167695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.167910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.167921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.167928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.168386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.168401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.168670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.168680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.168686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.579 [2024-10-01 15:59:01.168696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.168702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.168708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.168751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.168760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.178214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.178235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.178474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.178487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.178494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.178686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.178697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.178704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.178715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.178724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.178734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.178740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.178747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.178755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.178760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.178767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.178780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.178787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.579 [2024-10-01 15:59:01.189397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.189418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.189711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.189727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.189734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.189926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.189937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.189944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.190089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.190101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.190238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.190252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.190259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.579 [2024-10-01 15:59:01.190268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.190275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.190280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.190310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.190317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.200040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.200061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.200221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.200233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.200241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.200316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.200325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.200332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.200344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.200353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.200362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.200369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.200375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.200384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.200390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.200396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.200409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.200415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.579 [2024-10-01 15:59:01.212585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.212606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.212792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.212804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.579 [2024-10-01 15:59:01.212812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.213048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.213060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.213066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.579 [2024-10-01 15:59:01.213078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.213087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.579 [2024-10-01 15:59:01.213112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.213119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.213125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.579 [2024-10-01 15:59:01.213135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.579 [2024-10-01 15:59:01.213140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.579 [2024-10-01 15:59:01.213146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.579 [2024-10-01 15:59:01.213160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.213166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.579 [2024-10-01 15:59:01.224513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.224534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.579 [2024-10-01 15:59:01.224879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.579 [2024-10-01 15:59:01.224896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.579 [2024-10-01 15:59:01.224904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.225061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.225075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.580 [2024-10-01 15:59:01.225082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.225227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.225239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.225387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.225398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.225404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.225414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.225420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.225426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.225456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.225467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.580 [2024-10-01 15:59:01.235593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.235615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.235905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.235922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.580 [2024-10-01 15:59:01.235929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.236064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.236073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.580 [2024-10-01 15:59:01.236080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.236223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.236236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.236373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.236383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.236390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.580 [2024-10-01 15:59:01.236399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.236405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.236411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.236441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.236448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.246647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.246667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.246877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.246890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.580 [2024-10-01 15:59:01.246898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.246983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.246992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.580 [2024-10-01 15:59:01.246999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.247011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.247020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.247030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.247036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.247045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.247055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.247061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.247067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.247081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.247087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.580 [2024-10-01 15:59:01.258840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.258861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.259271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.259288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.580 [2024-10-01 15:59:01.259296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.259429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.259439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.580 [2024-10-01 15:59:01.259445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.260036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.260052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.260333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.260344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.260351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.580 [2024-10-01 15:59:01.260361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.260366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.260373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.260525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.260534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.269041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.269062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.580 [2024-10-01 15:59:01.269231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.269243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.580 [2024-10-01 15:59:01.269250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.269443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.580 [2024-10-01 15:59:01.269452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.580 [2024-10-01 15:59:01.269463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.580 [2024-10-01 15:59:01.269714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.269727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.580 [2024-10-01 15:59:01.270402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.270415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.270422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.270431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.580 [2024-10-01 15:59:01.270437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.580 [2024-10-01 15:59:01.270444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.580 [2024-10-01 15:59:01.270962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.580 [2024-10-01 15:59:01.270979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.580 [2024-10-01 15:59:01.279679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.279700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.279849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.279867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.580 [2024-10-01 15:59:01.279875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.279946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.279955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.580 [2024-10-01 15:59:01.279962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.280082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.280095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.280186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.280196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.280202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.280212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.280217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.280224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.280251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.280258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.290582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.290607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.290914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.290930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.580 [2024-10-01 15:59:01.290938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.291081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.291091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.580 [2024-10-01 15:59:01.291098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.291242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.291254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.291392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.291402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.291408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.291418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.291424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.291430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.291456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.291463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.302146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.302167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.302380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.302393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.580 [2024-10-01 15:59:01.302400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.302490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.302499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.580 [2024-10-01 15:59:01.302506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.302636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.302647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.302785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.302795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.302801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.302814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.302820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.302826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.302856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.302870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.314388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.314409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.314783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.314799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.580 [2024-10-01 15:59:01.314806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.314913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.314924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.580 [2024-10-01 15:59:01.314931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.315091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.315104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.580 [2024-10-01 15:59:01.315130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.315138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.315144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.315153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.580 [2024-10-01 15:59:01.315159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.580 [2024-10-01 15:59:01.315165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.580 [2024-10-01 15:59:01.315180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.315186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.580 [2024-10-01 15:59:01.324695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.324717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.580 [2024-10-01 15:59:01.324902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.580 [2024-10-01 15:59:01.324916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.580 [2024-10-01 15:59:01.324924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.580 [2024-10-01 15:59:01.325070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.325080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.325087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.325102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.325112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.325122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.325128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.325134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.325142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.325148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.325154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.325167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.325174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.336714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.336736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.336949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.336962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.336970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.337114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.337124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.337130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.337927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.337942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.338418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.338430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.338436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.338446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.338452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.338458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.338757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.338768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.348621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.348643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.348949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.348966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.348974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.349166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.349177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.349184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.349327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.349340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.349477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.349487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.349494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.349504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.349510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.349516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.349545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.349553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.360354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.360375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.360521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.360533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.360540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.360667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.360677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.360684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.360696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.360705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.360714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.360720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.360727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.360735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.360747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.360753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.360766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.360773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.372358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.372381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.372499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.372511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.372518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.372671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.372681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.372688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.372700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.372709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.372719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.372725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.372731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.372740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.372746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.372752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.373545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.373560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.384640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.384662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.384937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.384951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.384959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.385052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.385061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.385068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.385455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.385473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.385519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.385527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.385533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.385542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.385548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.385554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.385568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.385574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.394722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.394752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.394853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.394871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.394878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.394961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.394971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.394978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.394986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.394997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.395005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.395011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.395017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.395030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.395037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.395042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.395048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.395060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.406976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.406996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.407219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.407237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.407245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.407454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.407465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.407472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.407614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.407627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.407652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.407660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.407666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.407675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.581 [2024-10-01 15:59:01.407680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.581 [2024-10-01 15:59:01.407686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.581 [2024-10-01 15:59:01.407700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.407707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.581 [2024-10-01 15:59:01.419029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.419050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.581 [2024-10-01 15:59:01.419348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.419365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.581 [2024-10-01 15:59:01.419372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.419537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.581 [2024-10-01 15:59:01.419547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.581 [2024-10-01 15:59:01.419554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.581 [2024-10-01 15:59:01.419730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.581 [2024-10-01 15:59:01.419744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.582 [2024-10-01 15:59:01.419770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.419778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.419784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.419793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.419799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.419809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.419823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.419830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.429299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.429319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.429478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.429490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.582 [2024-10-01 15:59:01.429498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.429642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.429652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.582 [2024-10-01 15:59:01.429659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.429670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.582 [2024-10-01 15:59:01.429679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.582 [2024-10-01 15:59:01.429689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.429696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.429702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.429711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.429716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.429722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.429735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.429742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.442305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.442327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.442776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.442794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.582 [2024-10-01 15:59:01.442801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.442995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.443006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.582 [2024-10-01 15:59:01.443013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.443111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.582 [2024-10-01 15:59:01.443121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.582 [2024-10-01 15:59:01.443944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.443958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.443965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.443975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.582 [2024-10-01 15:59:01.443981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.582 [2024-10-01 15:59:01.443987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.582 [2024-10-01 15:59:01.444415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.444427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.582 [2024-10-01 15:59:01.452387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.453212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.582 [2024-10-01 15:59:01.453452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.453467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.582 [2024-10-01 15:59:01.453475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.454075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.582 [2024-10-01 15:59:01.454093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.582 [2024-10-01 15:59:01.454100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.582 [2024-10-01 15:59:01.454110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.454381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.454393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.454399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.454406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.454447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.454454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.454460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.454466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.454478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.582 [2024-10-01 15:59:01.463708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.464799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.464819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.464827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.465255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.465275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.465575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.465590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.465597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.465605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.465610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.465617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.465760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.582 [2024-10-01 15:59:01.465772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.465797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.465805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.465811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.465823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.475886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.475907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.476119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.476131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.476139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.476331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.476342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.476349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.476361] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.476370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.476388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.476395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.476402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.476410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.476417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.476423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.476440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.476446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.582 [2024-10-01 15:59:01.488752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.488774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.489245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.489262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.489270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.489463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.489474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.489481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.489719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.489734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.489870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.489881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.489888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.582 [2024-10-01 15:59:01.489897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.489904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.489910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.489939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.489947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.499552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.499573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.499812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.499825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.499833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.500025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.500037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.500044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.500490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.500504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.500673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.500687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.500694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.500703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.500709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.500715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.500895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.500905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.582 [2024-10-01 15:59:01.511584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.511606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.511955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.511972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.511980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.512108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.512117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.512124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.512310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.512323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.512471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.512481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.512487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.582 [2024-10-01 15:59:01.512497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.582 [2024-10-01 15:59:01.512503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.582 [2024-10-01 15:59:01.512509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.582 [2024-10-01 15:59:01.512539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.512547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.582 [2024-10-01 15:59:01.523133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.523155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.582 [2024-10-01 15:59:01.523558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.523574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.582 [2024-10-01 15:59:01.523582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.523803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.582 [2024-10-01 15:59:01.523814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.582 [2024-10-01 15:59:01.523821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.582 [2024-10-01 15:59:01.524081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.582 [2024-10-01 15:59:01.524094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.524242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.524251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.524258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.524267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.524273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.524280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.524309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.524317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.583 [2024-10-01 15:59:01.534622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.534643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.535031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.535048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.583 [2024-10-01 15:59:01.535056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.535299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.535309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.583 [2024-10-01 15:59:01.535316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.535579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.535593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.535741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.535751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.535758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.583 [2024-10-01 15:59:01.535768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.535774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.535780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.535810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.535821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.545925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.545946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.546114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.546126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.583 [2024-10-01 15:59:01.546134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.546275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.546284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.583 [2024-10-01 15:59:01.546292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.546303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.546313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.546322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.546328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.546335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.546343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.546349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.546355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.546369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.546375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.583 [2024-10-01 15:59:01.557666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.557688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.557788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.557801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.583 [2024-10-01 15:59:01.557808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.558001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.558012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.583 [2024-10-01 15:59:01.558019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.558031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.558040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.558050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.558056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.558066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.583 [2024-10-01 15:59:01.558075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.558080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.558086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.558100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.558106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.568795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.568817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.569288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.569306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.583 [2024-10-01 15:59:01.569313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.569460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.569470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.583 [2024-10-01 15:59:01.569476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.569734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.569747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.569784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.569791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.569798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.569807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.569813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.569820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.583 [2024-10-01 15:59:01.569953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.583 [2024-10-01 15:59:01.569963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.583 [2024-10-01 15:59:01.579754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.579775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.583 [2024-10-01 15:59:01.579972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.579987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.583 [2024-10-01 15:59:01.579994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.580152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.583 [2024-10-01 15:59:01.580165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.583 [2024-10-01 15:59:01.580173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.583 [2024-10-01 15:59:01.580304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.580315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.583 [2024-10-01 15:59:01.580454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.583 [2024-10-01 15:59:01.580465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.583 [2024-10-01 15:59:01.580472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.583 [2024-10-01 15:59:01.580481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.580487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.580494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.583 [2024-10-01 15:59:01.580523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.580531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.591609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.591629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.591784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.591797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.583 [2024-10-01 15:59:01.591804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.591898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.591908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.583 [2024-10-01 15:59:01.591915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.591927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.591935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.591945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.591951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.591958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.583 [2024-10-01 15:59:01.591966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.591972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.591978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.583 [2024-10-01 15:59:01.591991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.591998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.603091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.603116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.603460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.603477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.583 [2024-10-01 15:59:01.603484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.603681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.603692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.583 [2024-10-01 15:59:01.603699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.603903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.603918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.603944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.603952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.603958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.583 [2024-10-01 15:59:01.603967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.603973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.603979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.583 [2024-10-01 15:59:01.604107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.604116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.583 [2024-10-01 15:59:01.614910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.614931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.583 [2024-10-01 15:59:01.615236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.615252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.583 [2024-10-01 15:59:01.615260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.615337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.583 [2024-10-01 15:59:01.615347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.583 [2024-10-01 15:59:01.615354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.583 [2024-10-01 15:59:01.615529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.615541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.583 [2024-10-01 15:59:01.615682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.583 [2024-10-01 15:59:01.615692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.583 [2024-10-01 15:59:01.615698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.615711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.615717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.615723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.615754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.615761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.625494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.625515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.625725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.625738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.625745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.625962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.625974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.625981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.625993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.626002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.626021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.626028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.626034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.626043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.626049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.626055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.626068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.626075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.636687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.636709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.636869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.636882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.636890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.637083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.637092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.637103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.637114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.637123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.637133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.637139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.637145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.637154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.637160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.637166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.637179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.637186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.647114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.647134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.647293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.647305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.647313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.647483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.647493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.647500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.647511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.647521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.647530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.647536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.647543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.647551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.647557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.647563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.647576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.647583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.658307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.658329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.658501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.658515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.658523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.658720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.658730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.658738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.658750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.658759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.658768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.658774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.658780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.658791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.658797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.658804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.658817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.658824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.668729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.668749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.669135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.669152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.669160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.669299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.669309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.669315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.669470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.669482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.669826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.669837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.669844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.669853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.669868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.669874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.670030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.670040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.681021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.681043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.681371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.681388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.681395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.681476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.681485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.681492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.681635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.681647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.681784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.681795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.681803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.681813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.681820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.681827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.681857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.681872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.691609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.691630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.691795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.691808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.691816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.691963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.691973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.691980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.691994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.692003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.692013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.692019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.692025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.692033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.692039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.692045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.692057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.692064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.702853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.702883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.703043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.703055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.703063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.703259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.703268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.703275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.703286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.703296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.703306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.703312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.703319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.703327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.584 [2024-10-01 15:59:01.703333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.584 [2024-10-01 15:59:01.703339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.584 [2024-10-01 15:59:01.703353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.703360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.584 [2024-10-01 15:59:01.714307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.714330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.584 [2024-10-01 15:59:01.714617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.714637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.584 [2024-10-01 15:59:01.714645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.714820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.584 [2024-10-01 15:59:01.714830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.584 [2024-10-01 15:59:01.714837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.584 [2024-10-01 15:59:01.714871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.714882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.584 [2024-10-01 15:59:01.714891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.714897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.714904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.714913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.714918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.714924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.715108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.715118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.725181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.725202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.725642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.725659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.725666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.725751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.725761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.725768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.725931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.725944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.726083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.726092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.726099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.726108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.726114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.726124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.726154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.726162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.736105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.736126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.736244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.736257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.736264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.736414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.736423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.736430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.736837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.736851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.737114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.737124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.737130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.737140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.737146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.737152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.737541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.737553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.748144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.748166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.748429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.748445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.748453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.748541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.748551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.748557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.748760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.748776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.748926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.748937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.748943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.748952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.748958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.748964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.748995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.749003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.759246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.759267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.759398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.759411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.759418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.759560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.759570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.759577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.759706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.759717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.759855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.759871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.759878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.759887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.759893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.759899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.759929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.759937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.770161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.770183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.770438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.770454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.770464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.770598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.770608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.770615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.770757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.770769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.770915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.770924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.770931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.770940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.770945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.770951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.770981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.770989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.781651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.781673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.781789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.781801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.781808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.782024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.782036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.782043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.782055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.782065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.782074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.782081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.782087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.782096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.782102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.782108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.782125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.782132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.794311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.794333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.794599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.794615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.794623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.794770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.794780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.794787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.794936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.794949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.795096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.795106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.795112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.795121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.795128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.795134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.795163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.795170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.806470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.806491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.806604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.806616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.806623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.806766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.806776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.806782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.806794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.806803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.806816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.806822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.806829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.806837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.585 [2024-10-01 15:59:01.806842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.585 [2024-10-01 15:59:01.806848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.585 [2024-10-01 15:59:01.806868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.806874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.585 [2024-10-01 15:59:01.818295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.818317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.585 [2024-10-01 15:59:01.818781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.818800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.585 [2024-10-01 15:59:01.818809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.818954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.585 [2024-10-01 15:59:01.818965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.585 [2024-10-01 15:59:01.818972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.585 [2024-10-01 15:59:01.819327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.819342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.585 [2024-10-01 15:59:01.819495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.819506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.819512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.819521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.819528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.819534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.819689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.819699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.829823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.829845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.830100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.830117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.586 [2024-10-01 15:59:01.830125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.830326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.830337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.586 [2024-10-01 15:59:01.830344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.830578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.830592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.830737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.830747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.830754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.830763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.830769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.830775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.830806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.830814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.841029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.841051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.841319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.841336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.586 [2024-10-01 15:59:01.841343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.841518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.841529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.586 [2024-10-01 15:59:01.841536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.841681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.841693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.841719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.841727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.841733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.841742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.841748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.841755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.841768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.841778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.851955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.851977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.852087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.852100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.586 [2024-10-01 15:59:01.852107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.852246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.852256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.586 [2024-10-01 15:59:01.852263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.852274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.852284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.852294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.852301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.852308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.852317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.852322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.852329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.852342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.852349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.863665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.863687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.586 [2024-10-01 15:59:01.863936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.863952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.586 [2024-10-01 15:59:01.863960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.864029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.586 [2024-10-01 15:59:01.864039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.586 [2024-10-01 15:59:01.864046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.586 [2024-10-01 15:59:01.864199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.864212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.586 [2024-10-01 15:59:01.864238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.864245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.864256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.864266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.586 [2024-10-01 15:59:01.864272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.586 [2024-10-01 15:59:01.864278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.586 [2024-10-01 15:59:01.864292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.864299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.586 [2024-10-01 15:59:01.874145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.874165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.874269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.874282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.586 [2024-10-01 15:59:01.874289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.874431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.874441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.586 [2024-10-01 15:59:01.874447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.874459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.874468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.874478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.874484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.874491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.586 [2024-10-01 15:59:01.874499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.874505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.874511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-10-01 15:59:01.874525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-10-01 15:59:01.874532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-10-01 15:59:01.886339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.886362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.886526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.886539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.586 [2024-10-01 15:59:01.886546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.886635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.886648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.586 [2024-10-01 15:59:01.886655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.886667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.886676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.886694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.886701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.886707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-10-01 15:59:01.886715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.886721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.886727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-10-01 15:59:01.886740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-10-01 15:59:01.886747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.586 [2024-10-01 15:59:01.898195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.898216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.898587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.898603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.586 [2024-10-01 15:59:01.898611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.898814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.898825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.586 [2024-10-01 15:59:01.898832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.899035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.899050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.586 [2024-10-01 15:59:01.899077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.899084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.899091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.586 [2024-10-01 15:59:01.899100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.586 [2024-10-01 15:59:01.899105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.586 [2024-10-01 15:59:01.899111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.586 [2024-10-01 15:59:01.899124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-10-01 15:59:01.899131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.586 [2024-10-01 15:59:01.908627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.908648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.586 [2024-10-01 15:59:01.908820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.908833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.586 [2024-10-01 15:59:01.908840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.908945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.586 [2024-10-01 15:59:01.908955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.586 [2024-10-01 15:59:01.908962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.586 [2024-10-01 15:59:01.908973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.908983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.908993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.908999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.909005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.909014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.909020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.909026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.909040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.909046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.587 [2024-10-01 15:59:01.921158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.921179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.921278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.921290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.921298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.921368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.921377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.921383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.921660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.921673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.921821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.921830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.921840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.587 [2024-10-01 15:59:01.921849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.921855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.921861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.921899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.921906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.932644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.932666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.933015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.933032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.933040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.933186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.933196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.933202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.933350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.933366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.933515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.933526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.933533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.933542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.933548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.933554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.933696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.933706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.587 [2024-10-01 15:59:01.943914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.943935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.944147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.944159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.944166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.944256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.944265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.944283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.944294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.944303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.944313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.944319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.944325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.587 [2024-10-01 15:59:01.944334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.944339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.944346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.944359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.944366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.955402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.955423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.955716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.955732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.955739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.955909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.955919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.955926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.956127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.956141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.956167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.956175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.956181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.956190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.956196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.956201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.956215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.956222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.587 [2024-10-01 15:59:01.966635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.966656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.966762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.966775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.966782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.966930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.966940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.966947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.967284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.967298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.967455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.967466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.967472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.587 [2024-10-01 15:59:01.967481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.967487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.967493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.967665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.967674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.978408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.978431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.978711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.978727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.978735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.978888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.978899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.978906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.979156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.979169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.979317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.979327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.979334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.979343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.979353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.979359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.979389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.979397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.587 [2024-10-01 15:59:01.990404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.990425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:01.990525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.990537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:01.990545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.990692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:01.990701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:01.990708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:01.990719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.990728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:01.990738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.990744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.990750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.587 [2024-10-01 15:59:01.990759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:01.990765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:01.990771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:01.990784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:01.990791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 11356.90 IOPS, 44.36 MiB/s [2024-10-01 15:59:02.001981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:02.002003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:02.002247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:02.002269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:02.002276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:02.002417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:02.002426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:02.002433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:02.002788] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:02.002803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.587 [2024-10-01 15:59:02.002961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:02.002972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:02.002978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:02.002988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.587 [2024-10-01 15:59:02.002994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.587 [2024-10-01 15:59:02.003000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.587 [2024-10-01 15:59:02.003141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.587 [2024-10-01 15:59:02.003151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.587 [2024-10-01 15:59:02.012199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:02.012219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.587 [2024-10-01 15:59:02.012384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:02.012396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.587 [2024-10-01 15:59:02.012404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.587 [2024-10-01 15:59:02.012551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.587 [2024-10-01 15:59:02.012560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.587 [2024-10-01 15:59:02.012567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.588 [2024-10-01 15:59:02.012578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.588 [2024-10-01 15:59:02.012587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.588 [2024-10-01 15:59:02.012597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.588 [2024-10-01 15:59:02.012603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.588 [2024-10-01 15:59:02.012610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.588 [2024-10-01 15:59:02.012618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.012623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.012629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.012642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.012649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.024559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.024580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.024695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.024707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.024715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.024800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.024809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.024816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.024828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.024837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.024846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.024853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.024859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.024873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.024879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.024885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.024898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.024905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.035218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.035239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.035360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.035372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.035379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.035535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.035545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.035551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.035563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.035573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.035583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.035589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.035595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.035603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.035609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.035619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.035633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.035639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.047631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.047653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.047923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.047940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.047948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.048091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.048100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.048107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.048263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.048276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.048414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.048425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.048431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.048440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.048446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.048452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.048482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.048489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.058972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.058994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.059331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.059348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.059356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.059506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.059516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.059523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.059776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.059793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.059948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.059959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.059966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.059975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.059981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.059987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.060016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.060024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.070339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.070361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.070588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.070603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.070611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.070737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.070747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.070754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.070934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.070947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.070986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.070994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.071000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.071010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.071016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.071022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.071037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.071043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.081294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.081315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.081432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.081444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.081455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.081602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.081612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.081619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.081631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.081639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.081649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.081655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.081662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.081670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.081676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.081682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.082136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.082147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.093592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.093613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.093960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.093977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.093985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.094124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.094133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.094140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.094288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.094300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.094437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.094447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.094454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.094463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.094469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.094479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.094509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.094516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.104328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.104349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.104586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.104600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.588 [2024-10-01 15:59:02.104607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.104820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.104831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.104838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.105288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.105303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.588 [2024-10-01 15:59:02.105500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.105511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.105518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.105527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.588 [2024-10-01 15:59:02.105533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.588 [2024-10-01 15:59:02.105540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.588 [2024-10-01 15:59:02.105683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.105693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.588 [2024-10-01 15:59:02.115662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.115684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.588 [2024-10-01 15:59:02.115771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.115784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.588 [2024-10-01 15:59:02.115791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.588 [2024-10-01 15:59:02.115939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.588 [2024-10-01 15:59:02.115949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.115956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.115968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.115977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.115990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.115996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.116002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.116011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.116017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.116023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.116471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.116481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.126977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.126997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.127231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.127243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.127251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.127474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.127485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.127491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.127942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.127957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.128125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.128135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.128141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.128150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.128157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.128163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.128336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.128346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.137611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.137632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.137811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.137824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.137832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.138047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.138058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.138065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.138076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.138085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.138095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.138101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.138107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.138116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.138122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.138128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.138141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.138148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.150242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.150264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.150685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.150703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.150710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.150925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.150937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.150944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.151681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.151698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.151987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.151998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.152005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.152014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.152020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.152027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.152069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.152080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.160352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.160373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.160556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.160569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.160576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.160811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.160822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.160828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.161172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.161188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.161227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.161234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.161241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.161249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.161256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.161262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.161360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.161369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.171261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.171281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.171558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.171572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.171579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.171651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.171660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.171667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.172409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.172427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.172946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.172963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.172970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.172979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.172985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.172991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.173173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.173183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.182364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.182385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.182617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.182630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.182638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.182880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.182892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.182899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.183199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.183213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.183459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.183469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.183476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.183485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.183491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.183497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.183536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.183543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.192442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.192471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.192721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.192734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.589 [2024-10-01 15:59:02.192742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.192951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.192966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.589 [2024-10-01 15:59:02.192973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.589 [2024-10-01 15:59:02.192982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.192993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.589 [2024-10-01 15:59:02.193001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.193006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.193012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.193025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.193032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.589 [2024-10-01 15:59:02.193038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.589 [2024-10-01 15:59:02.193043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.589 [2024-10-01 15:59:02.193055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.589 [2024-10-01 15:59:02.203660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.203680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.589 [2024-10-01 15:59:02.203858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.589 [2024-10-01 15:59:02.203875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.203882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.204074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.204084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.204091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.204103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.204112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.204122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.204128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.204135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.204143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.204149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.204155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.204168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.204175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.215598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.215619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.216031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.216049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.216056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.216219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.216228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.216235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.216274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.216285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.216295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.216301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.216308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.216317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.216322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.216329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.216342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.216349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.226875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.226896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.227244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.227260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.227268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.227410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.227419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.227426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.227456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.227467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.227477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.227483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.227495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.227504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.227510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.227517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.227530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.227536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.236956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.236985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.237152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.237164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.237172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.237390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.237400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.237407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.237416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.237427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.237435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.237440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.237447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.237460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.237467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.237472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.237478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.237490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.249207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.249228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.249611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.249627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.249634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.249727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.249736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.249746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.250004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.250018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.250054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.250062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.250068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.250078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.250084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.250090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.250219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.250228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.260277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.260298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.260549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.260563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.260571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.260779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.260790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.260796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.260808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.260817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.260827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.260834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.260840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.260848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.260854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.260861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.260881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.260887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.271455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.271480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.271776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.271793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.271801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.271957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.271968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.271975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.272004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.272014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.272024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.272030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.272037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.272047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.272053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.272059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.272073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.272079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.281915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.281935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.282089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.282102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.282109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.282326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.282335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.282342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.282354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.282363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.282372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.282379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.282385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.282396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.282402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.282408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.282421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.282428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.294334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.294355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.590 [2024-10-01 15:59:02.294759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.294775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.590 [2024-10-01 15:59:02.294783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.294867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.590 [2024-10-01 15:59:02.294877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.590 [2024-10-01 15:59:02.294884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.590 [2024-10-01 15:59:02.295032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.295044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.590 [2024-10-01 15:59:02.295183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.295192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.295199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.295208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.590 [2024-10-01 15:59:02.295215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.590 [2024-10-01 15:59:02.295221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.590 [2024-10-01 15:59:02.295250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.295258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.590 [2024-10-01 15:59:02.304440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.590 [2024-10-01 15:59:02.304461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.590 [2024-10-01 15:59:02.304625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.590 [2024-10-01 15:59:02.304638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.304646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.304894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.304905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.304912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.305402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.305416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.305882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.305893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.305900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.591 [2024-10-01 15:59:02.305909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.305915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.305922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.306300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.306310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.316511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.316532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.316884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.316901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.316908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.317103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.317114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.317121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.317413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.317428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.317467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.317474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.317481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.317489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.317495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.317501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.317630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.317639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.591 [2024-10-01 15:59:02.327992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.328013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.328342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.328358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.328366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.328512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.328522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.328529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.328675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.328687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.328825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.328836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.328843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.591 [2024-10-01 15:59:02.328852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.328858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.328870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.328901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.328908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.339470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.339492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.339788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.339804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.339811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.340078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.340090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.340097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.340284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.340299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.340440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.340450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.340457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.340466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.340476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.340482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.340625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.340635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.591 [2024-10-01 15:59:02.350980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.351002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.351379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.351395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.351402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.351595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.351606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.351612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.351900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.351914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.352066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.352076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.352083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.591 [2024-10-01 15:59:02.352092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.352099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.352105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.352135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.352143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.591 [2024-10-01 15:59:02.362418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.362440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.591 [2024-10-01 15:59:02.362842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.362859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.591 [2024-10-01 15:59:02.362871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.363069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.591 [2024-10-01 15:59:02.363079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.591 [2024-10-01 15:59:02.363086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.591 [2024-10-01 15:59:02.363350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.363369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.591 [2024-10-01 15:59:02.363518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.363528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.591 [2024-10-01 15:59:02.363535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.591 [2024-10-01 15:59:02.363544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.591 [2024-10-01 15:59:02.363551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.363557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.363586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.363593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.592 [2024-10-01 15:59:02.373914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.373935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.374285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.374301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.592 [2024-10-01 15:59:02.374309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.374503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.374513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.592 [2024-10-01 15:59:02.374520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.374694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.374708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.374850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.374860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.374874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.592 [2024-10-01 15:59:02.374884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.374890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.374896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.375039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.375049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.385426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.385448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.385787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.385804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.592 [2024-10-01 15:59:02.385816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.386036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.386049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.592 [2024-10-01 15:59:02.386057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.386320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.386335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.386484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.386495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.386503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.386512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.386518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.386525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.386555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.386563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.592 [2024-10-01 15:59:02.396892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.396917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.397218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.397234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.592 [2024-10-01 15:59:02.397242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.397386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.397396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.592 [2024-10-01 15:59:02.397402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.397577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.397591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.397731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.397741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.397748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.592 [2024-10-01 15:59:02.397757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.397763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.397773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.397804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.397812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.408362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.408385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.408731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.408748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.592 [2024-10-01 15:59:02.408755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.408878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.408888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.592 [2024-10-01 15:59:02.408895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.409078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.409092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.592 [2024-10-01 15:59:02.409233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.409243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.409250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.409259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.592 [2024-10-01 15:59:02.409266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.592 [2024-10-01 15:59:02.409272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.592 [2024-10-01 15:59:02.409414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.592 [2024-10-01 15:59:02.409423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.592 [2024-10-01 15:59:02.419814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.419835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.592 [2024-10-01 15:59:02.420154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.592 [2024-10-01 15:59:02.420171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.592 [2024-10-01 15:59:02.420179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.592 [2024-10-01 15:59:02.420343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.420354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.420360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.420643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.420657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.420812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.420822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.420829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.593 [2024-10-01 15:59:02.420838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.420844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.420850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.420887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.420895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.431086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.431107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.431344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.431357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.431365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.431584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.431595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.593 [2024-10-01 15:59:02.431602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.431841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.431854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.432008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.432019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.432026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.432035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.432041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.432047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.432075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.432083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.593 [2024-10-01 15:59:02.442690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.442711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.442874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.442888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.593 [2024-10-01 15:59:02.442895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.443044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.443054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.443061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.443072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.443081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.443091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.443097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.443104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.593 [2024-10-01 15:59:02.443112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.443118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.443124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.443138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.443144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.454522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.454543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.454776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.454788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.454796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.455011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.455022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.593 [2024-10-01 15:59:02.455029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.455041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.455051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.455061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.455067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.455073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.455082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.455087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.455094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.455109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.455119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.593 [2024-10-01 15:59:02.466214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.466236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.466488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.466502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.593 [2024-10-01 15:59:02.466509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.466716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.466727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.466734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.466746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.466755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.593 [2024-10-01 15:59:02.466765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.466771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.466777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.593 [2024-10-01 15:59:02.466785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.593 [2024-10-01 15:59:02.466792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.593 [2024-10-01 15:59:02.466798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.593 [2024-10-01 15:59:02.466811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.466818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.593 [2024-10-01 15:59:02.477826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.477847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.593 [2024-10-01 15:59:02.478088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.478107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.593 [2024-10-01 15:59:02.478114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.478236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.593 [2024-10-01 15:59:02.478246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.593 [2024-10-01 15:59:02.478252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.593 [2024-10-01 15:59:02.478264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.478273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.478283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.478293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.478299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.478308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.478314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.478320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.478333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.478339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.594 [2024-10-01 15:59:02.489619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.489640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.489787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.489800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.594 [2024-10-01 15:59:02.489807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.490027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.490037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.594 [2024-10-01 15:59:02.490044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.490055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.490065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.490083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.490090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.490097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.594 [2024-10-01 15:59:02.490105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.490111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.490117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.490131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.490137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.502622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.502644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.502878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.502891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.594 [2024-10-01 15:59:02.502898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.503092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.503106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.594 [2024-10-01 15:59:02.503113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.503125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.503134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.503152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.503159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.503165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.503174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.503180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.503186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.503199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.503206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.594 [2024-10-01 15:59:02.514492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.514514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.514834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.514850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.594 [2024-10-01 15:59:02.514857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.515029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.515039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.594 [2024-10-01 15:59:02.515046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.515190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.515203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.515341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.515351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.515358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.594 [2024-10-01 15:59:02.515367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.515373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.515379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.515523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.515532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.526031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.526051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.526217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.526230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.594 [2024-10-01 15:59:02.526237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.526371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.526381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.594 [2024-10-01 15:59:02.526387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.526399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.526408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.526418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.526424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.526431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.526440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.526445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.594 [2024-10-01 15:59:02.526451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.594 [2024-10-01 15:59:02.526464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.594 [2024-10-01 15:59:02.526471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.594 [2024-10-01 15:59:02.538245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.538266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.594 [2024-10-01 15:59:02.538426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.538439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.594 [2024-10-01 15:59:02.538446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.538665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.594 [2024-10-01 15:59:02.538675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.594 [2024-10-01 15:59:02.538682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.594 [2024-10-01 15:59:02.538694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.538703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.594 [2024-10-01 15:59:02.538712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.594 [2024-10-01 15:59:02.538718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.538729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.595 [2024-10-01 15:59:02.538737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.538743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.538750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.538763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.538770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.550222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.550243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.550629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.550645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.595 [2024-10-01 15:59:02.550653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.550810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.550820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.595 [2024-10-01 15:59:02.550827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.551037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.551050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.551195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.551204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.551210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.551219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.551225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.551232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.551263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.551270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.595 [2024-10-01 15:59:02.561733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.561754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.561998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.562011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.595 [2024-10-01 15:59:02.562018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.562231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.562241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.595 [2024-10-01 15:59:02.562251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.562263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.562272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.562281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.562287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.562293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.595 [2024-10-01 15:59:02.562302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.562308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.562313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.562327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.562334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.573260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.573281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.573519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.573531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.595 [2024-10-01 15:59:02.573539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.573776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.573787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.595 [2024-10-01 15:59:02.573793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.574421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.574437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.574580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.574589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.574596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.574605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.574612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.574618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.575549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.575564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.595 [2024-10-01 15:59:02.583762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.583786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.584017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.584031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.595 [2024-10-01 15:59:02.584039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.584253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.584264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.595 [2024-10-01 15:59:02.584270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.584283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.584292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.584302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.584308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.584315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.595 [2024-10-01 15:59:02.584323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.584330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.584338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.584351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.584358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.595 [2024-10-01 15:59:02.594938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.594960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.595 [2024-10-01 15:59:02.595261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.595277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.595 [2024-10-01 15:59:02.595284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.595483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.595 [2024-10-01 15:59:02.595494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.595 [2024-10-01 15:59:02.595501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.595 [2024-10-01 15:59:02.595644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.595656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.595 [2024-10-01 15:59:02.595793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.595802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.595 [2024-10-01 15:59:02.595809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.595 [2024-10-01 15:59:02.595822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.595 [2024-10-01 15:59:02.595828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.595834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.595904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.595913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.596 [2024-10-01 15:59:02.605456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.605476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.605710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.605723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.596 [2024-10-01 15:59:02.605730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.605921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.605933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.596 [2024-10-01 15:59:02.605940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.605951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.605961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.605970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.605977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.605983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.596 [2024-10-01 15:59:02.605992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.605997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.606004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.606017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.606024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.618161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.618182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.618368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.618380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.596 [2024-10-01 15:59:02.618388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.618527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.618536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.596 [2024-10-01 15:59:02.618543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.618565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.618575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.618584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.618590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.618596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.618605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.618611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.618617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.618630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.618637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.596 [2024-10-01 15:59:02.628872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.628893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.629077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.629089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.596 [2024-10-01 15:59:02.629096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.629290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.629299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.596 [2024-10-01 15:59:02.629306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.629317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.629327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.629336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.629342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.629348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.596 [2024-10-01 15:59:02.629357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.629362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.629369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.629382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.629389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.641151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.641171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.641337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.641349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.596 [2024-10-01 15:59:02.641356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.641549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.641559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.596 [2024-10-01 15:59:02.641565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.641577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.641585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.596 [2024-10-01 15:59:02.641595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.641601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.641607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.641616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.596 [2024-10-01 15:59:02.641622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.596 [2024-10-01 15:59:02.641627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.596 [2024-10-01 15:59:02.641640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.596 [2024-10-01 15:59:02.641647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.596 [2024-10-01 15:59:02.653430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.653451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.596 [2024-10-01 15:59:02.653756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.653771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.596 [2024-10-01 15:59:02.653779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.596 [2024-10-01 15:59:02.653996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.596 [2024-10-01 15:59:02.654007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.654014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.654297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.654311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.654462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.654472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.654478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.597 [2024-10-01 15:59:02.654487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.654497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.654503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.654534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.654542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.664492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.664513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.664753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.664766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.664774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.664990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.665001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.597 [2024-10-01 15:59:02.665007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.665248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.665260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.665409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.665419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.665426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.665435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.665441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.665448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.665477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.665484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.597 [2024-10-01 15:59:02.675913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.675933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.676154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.676166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.597 [2024-10-01 15:59:02.676173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.676390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.676400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.676407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.676418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.676431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.676441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.676448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.676453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.597 [2024-10-01 15:59:02.676462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.676468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.676474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.676488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.676494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.688308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.688330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.688543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.688556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.688563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.688731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.688740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.597 [2024-10-01 15:59:02.688747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.688759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.688768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.688778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.688784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.688790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.688799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.688805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.688811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.688825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.688831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.597 [2024-10-01 15:59:02.700076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.700098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.700285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.700301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.597 [2024-10-01 15:59:02.700309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.700526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.700537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.700543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.700555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.700564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.700574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.700580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.700586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.597 [2024-10-01 15:59:02.700595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.597 [2024-10-01 15:59:02.700600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.597 [2024-10-01 15:59:02.700606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.597 [2024-10-01 15:59:02.700620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.700626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.597 [2024-10-01 15:59:02.711719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.711741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.597 [2024-10-01 15:59:02.712013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.712028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.597 [2024-10-01 15:59:02.712036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.712178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.597 [2024-10-01 15:59:02.712187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.597 [2024-10-01 15:59:02.712194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.597 [2024-10-01 15:59:02.712206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.597 [2024-10-01 15:59:02.712215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.712225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.712231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.712237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.712246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.712251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.712261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.712274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.712281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.598 [2024-10-01 15:59:02.723868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.723889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.724126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.724139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.598 [2024-10-01 15:59:02.724146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.724335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.724345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.598 [2024-10-01 15:59:02.724352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.724364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.724373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.724392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.724399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.724405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.598 [2024-10-01 15:59:02.724413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.724420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.724426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.724439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.724446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.736016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.736037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.736229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.736242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.598 [2024-10-01 15:59:02.736249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.736488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.736499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.598 [2024-10-01 15:59:02.736505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.736517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.736527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.736540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.736546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.736552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.736561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.736566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.736573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.736586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.736592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.598 [2024-10-01 15:59:02.747116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.747136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.747377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.747395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.598 [2024-10-01 15:59:02.747403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.747536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.747545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.598 [2024-10-01 15:59:02.747552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.747563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.747573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.747582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.747588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.747595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.598 [2024-10-01 15:59:02.747603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.747608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.747615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.747628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.747634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.760034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.760055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.760422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.760438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.598 [2024-10-01 15:59:02.760449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.760590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.760599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.598 [2024-10-01 15:59:02.760605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.760749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.760762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.760905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.760916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.760923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.760932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.760938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.760944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.598 [2024-10-01 15:59:02.760974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.598 [2024-10-01 15:59:02.760981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.598 [2024-10-01 15:59:02.770455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.770476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.598 [2024-10-01 15:59:02.770706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.770719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.598 [2024-10-01 15:59:02.770726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.770942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.598 [2024-10-01 15:59:02.770953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.598 [2024-10-01 15:59:02.770960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.598 [2024-10-01 15:59:02.771200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.771213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.598 [2024-10-01 15:59:02.771250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.598 [2024-10-01 15:59:02.771257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.598 [2024-10-01 15:59:02.771263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.598 [2024-10-01 15:59:02.771272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.771277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.771283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.771416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.771425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.782300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.782320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.782537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.782549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.599 [2024-10-01 15:59:02.782556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.782641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.782650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.599 [2024-10-01 15:59:02.782657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.782668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.782677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.782687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.782693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.782699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.782707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.782714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.782720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.782733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.782740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.599 [2024-10-01 15:59:02.793431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.793451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.793675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.599 [2024-10-01 15:59:02.793683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.793884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.793894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.599 [2024-10-01 15:59:02.793901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.793913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.793922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.793932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.793942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.793948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.599 [2024-10-01 15:59:02.793957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.793963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.793969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.793983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.793990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.803956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.803977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.804213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.804227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.599 [2024-10-01 15:59:02.804235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.804441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.804451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.599 [2024-10-01 15:59:02.804457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.804469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.804478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.804495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.804502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.804509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.804518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.804524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.804530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.804543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.804550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.599 [2024-10-01 15:59:02.814774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.814795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.815012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.815026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.599 [2024-10-01 15:59:02.815034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.815138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.815148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.599 [2024-10-01 15:59:02.815155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.815854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.815874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.816342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.816353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.816360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.599 [2024-10-01 15:59:02.816370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.816376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.816382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.816681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.816691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.599 [2024-10-01 15:59:02.826306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.826328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.599 [2024-10-01 15:59:02.826561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.826574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.599 [2024-10-01 15:59:02.826582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.826776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.599 [2024-10-01 15:59:02.826787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.599 [2024-10-01 15:59:02.826794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.599 [2024-10-01 15:59:02.826806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.826816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.599 [2024-10-01 15:59:02.826825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.826832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.826838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.599 [2024-10-01 15:59:02.826846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.599 [2024-10-01 15:59:02.826852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.599 [2024-10-01 15:59:02.826859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.600 [2024-10-01 15:59:02.826877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.826884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.600 [2024-10-01 15:59:02.838183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.838204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.838313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.838326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.600 [2024-10-01 15:59:02.838333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.838549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.838559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.600 [2024-10-01 15:59:02.838565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.838577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.838586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.838595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.838601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.838607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.600 [2024-10-01 15:59:02.838615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.838621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.838627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.600 [2024-10-01 15:59:02.838641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.838647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.851022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.851043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.851284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.851304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.600 [2024-10-01 15:59:02.851311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.851504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.851515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.600 [2024-10-01 15:59:02.851522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.851533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.851542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.851552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.851558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.851567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.600 [2024-10-01 15:59:02.851576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.851582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.851588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.600 [2024-10-01 15:59:02.851601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.851607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.600 [2024-10-01 15:59:02.862411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.862432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.862683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.862697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.600 [2024-10-01 15:59:02.862705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.862920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.862932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.600 [2024-10-01 15:59:02.862939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.862951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.862960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.600 [2024-10-01 15:59:02.862969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.862976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.862982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.600 [2024-10-01 15:59:02.862990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.600 [2024-10-01 15:59:02.862996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.600 [2024-10-01 15:59:02.863002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.600 [2024-10-01 15:59:02.863016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.863022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.600 [2024-10-01 15:59:02.872992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.873012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.600 [2024-10-01 15:59:02.873199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.873211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.600 [2024-10-01 15:59:02.873218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.873384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.600 [2024-10-01 15:59:02.873393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.600 [2024-10-01 15:59:02.873403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.600 [2024-10-01 15:59:02.873415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.600 [2024-10-01 15:59:02.873423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.600 [2024-10-01 15:59:02.873433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.600 [2024-10-01 15:59:02.873440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.600 [2024-10-01 15:59:02.873446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.600 [2024-10-01 15:59:02.873454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.600 [2024-10-01 15:59:02.873460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.600 [2024-10-01 15:59:02.873466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.600 [2024-10-01 15:59:02.873479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.600 [2024-10-01 15:59:02.873486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.600 [2024-10-01 15:59:02.884857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.600 [2024-10-01 15:59:02.884883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.600 [2024-10-01 15:59:02.885041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.600 [2024-10-01 15:59:02.885053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.600 [2024-10-01 15:59:02.885060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.600 [2024-10-01 15:59:02.885229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.885240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.601 [2024-10-01 15:59:02.885247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.885259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.885268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.885278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.885284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.885290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.885299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.885305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.885311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.885324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.885330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.895699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.895723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.895887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.895900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.601 [2024-10-01 15:59:02.895908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.896100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.896110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.601 [2024-10-01 15:59:02.896117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.896129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.896138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.896148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.896154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.896161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.896169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.896175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.896181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.896194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.896201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.907576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.907598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.907757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.907770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.601 [2024-10-01 15:59:02.907778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.907915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.907926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.601 [2024-10-01 15:59:02.907933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.907945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.907955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.907965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.907972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.907978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.907990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.907996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.908002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.908016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.908023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.918177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.918198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.918370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.918383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.601 [2024-10-01 15:59:02.918391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.918551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.918561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.601 [2024-10-01 15:59:02.918569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.918581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.918590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.601 [2024-10-01 15:59:02.918600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.918607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.918614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.918623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.601 [2024-10-01 15:59:02.918629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.601 [2024-10-01 15:59:02.918635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.601 [2024-10-01 15:59:02.918648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.918655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.601 [2024-10-01 15:59:02.931345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.931366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.601 [2024-10-01 15:59:02.931601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.931614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.601 [2024-10-01 15:59:02.931622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.601 [2024-10-01 15:59:02.931838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.601 [2024-10-01 15:59:02.931849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.931857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.931878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.931888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.931898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.931905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.931911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.931920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.931925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.931931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.931945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.931952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.942354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.942375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.942607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.942619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.942627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.942766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.942775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.602 [2024-10-01 15:59:02.942782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.942794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.942803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.942813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.942819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.942826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.942834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.942840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.942847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.942860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.942873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.953248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.953269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.953487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.953499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.602 [2024-10-01 15:59:02.953507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.953722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.953732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.953739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.953750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.953760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.953769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.953775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.953782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.953791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.953797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.953803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.953816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.953823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.964188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.964210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.964409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.964422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.964429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.964572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.964582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.602 [2024-10-01 15:59:02.964589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.964600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.964609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.964619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.964625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.964632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.964640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.964650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.964656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.964669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.964675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.975046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.975068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.975325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.975338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.602 [2024-10-01 15:59:02.975345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.975488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.975497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.975504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.975515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.975525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.975535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.975541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.975547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.975555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.602 [2024-10-01 15:59:02.975561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.602 [2024-10-01 15:59:02.975567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.602 [2024-10-01 15:59:02.975581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.975587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.602 [2024-10-01 15:59:02.987415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.987437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.602 [2024-10-01 15:59:02.987701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.987715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.602 [2024-10-01 15:59:02.987722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.987918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.602 [2024-10-01 15:59:02.987929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.602 [2024-10-01 15:59:02.987936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.602 [2024-10-01 15:59:02.988774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.602 [2024-10-01 15:59:02.988792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:02.989144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:02.989156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:02.989163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:02.989173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:02.989179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:02.989185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:02.989340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:02.989349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 11355.82 IOPS, 44.36 MiB/s [2024-10-01 15:59:02.999765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:02.999783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.000053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.000067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.603 [2024-10-01 15:59:03.000075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.000290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.000301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.603 [2024-10-01 15:59:03.000307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.000320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.000329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.000339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.000345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.000351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.000359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.000365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.000372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.000385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.000391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.010566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.010588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.010827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.010851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.603 [2024-10-01 15:59:03.010859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.011009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.011019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.603 [2024-10-01 15:59:03.011026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.011038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.011047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.011057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.011063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.011069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.011078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.011084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.011090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.011104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.011110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.021910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.021931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.022175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.022188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.603 [2024-10-01 15:59:03.022196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.022387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.022397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.603 [2024-10-01 15:59:03.022404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.022416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.022425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.022435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.022442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.022448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.022456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.603 [2024-10-01 15:59:03.022462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.603 [2024-10-01 15:59:03.022472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.603 [2024-10-01 15:59:03.022485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.022492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.603 [2024-10-01 15:59:03.033597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.033618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.603 [2024-10-01 15:59:03.033853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.033870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.603 [2024-10-01 15:59:03.033878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.034016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.603 [2024-10-01 15:59:03.034026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.603 [2024-10-01 15:59:03.034033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.603 [2024-10-01 15:59:03.034044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.603 [2024-10-01 15:59:03.034053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.604 [2024-10-01 15:59:03.034064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.604 [2024-10-01 15:59:03.034070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.604 [2024-10-01 15:59:03.034077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.604 [2024-10-01 15:59:03.034085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.604 [2024-10-01 15:59:03.034091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.604 [2024-10-01 15:59:03.034097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.604 [2024-10-01 15:59:03.034110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.604 [2024-10-01 15:59:03.034117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.604 [2024-10-01 15:59:03.045052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.604 [2024-10-01 15:59:03.045073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.604 [2024-10-01 15:59:03.045256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.604 [2024-10-01 15:59:03.045270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.604 [2024-10-01 15:59:03.045277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.604 [2024-10-01 15:59:03.045493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.604 [2024-10-01 15:59:03.045503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.604 [2024-10-01 15:59:03.045509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.604 [2024-10-01 15:59:03.045588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.604 [2024-10-01 15:59:03.045601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.604 [2024-10-01 15:59:03.045677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.604 [2024-10-01 15:59:03.045684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.604 [2024-10-01 15:59:03.045690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.604 [2024-10-01 15:59:03.045699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.045705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.045711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.047416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.047433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.057279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.057301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.057857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.057879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.604 [2024-10-01 15:59:03.057887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.058157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.058166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.604 [2024-10-01 15:59:03.058173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.058446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.058458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.058494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.058501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.058508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.058517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.058523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.058529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.058542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.058548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.604 [2024-10-01 15:59:03.067360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.067391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.067545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.067558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.604 [2024-10-01 15:59:03.067569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.067791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.067803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.604 [2024-10-01 15:59:03.067810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.067818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.067830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.067838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.067844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.067850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.604 [2024-10-01 15:59:03.067868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.067875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.067881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.067887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.067898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.077426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.077674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.077690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.604 [2024-10-01 15:59:03.077698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.077718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.077731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.077743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.077750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.077757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.077768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.604 [2024-10-01 15:59:03.077915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.077927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.604 [2024-10-01 15:59:03.077933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.078118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.078149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.078157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.604 [2024-10-01 15:59:03.078167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.604 [2024-10-01 15:59:03.078181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.604 [2024-10-01 15:59:03.089079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.089101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.604 [2024-10-01 15:59:03.089455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.089471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.604 [2024-10-01 15:59:03.089478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.089681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.604 [2024-10-01 15:59:03.089691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.604 [2024-10-01 15:59:03.089698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.604 [2024-10-01 15:59:03.089956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.089971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.604 [2024-10-01 15:59:03.090120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.604 [2024-10-01 15:59:03.090130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.090137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.605 [2024-10-01 15:59:03.090146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.090152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.090158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.090188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.090195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.100268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.100290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.100558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.100571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.605 [2024-10-01 15:59:03.100578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.100671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.100680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.605 [2024-10-01 15:59:03.100687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.100815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.100827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.100988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.100998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.101005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.101014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.101020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.101026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.101168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.101179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.605 [2024-10-01 15:59:03.111021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.111042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.111253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.111267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.605 [2024-10-01 15:59:03.111275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.111423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.111433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.605 [2024-10-01 15:59:03.111440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.111451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.111460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.111469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.111476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.111482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.605 [2024-10-01 15:59:03.111490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.111496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.111502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.111516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.111522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.122169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.122191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.122307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.122320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.605 [2024-10-01 15:59:03.122327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.122549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.122559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.605 [2024-10-01 15:59:03.122566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.122577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.122586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.122596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.122602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.122608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.122616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.122622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.122628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.605 [2024-10-01 15:59:03.122641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.605 [2024-10-01 15:59:03.122648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.605 [2024-10-01 15:59:03.133368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.133389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.605 [2024-10-01 15:59:03.133576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.133588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.605 [2024-10-01 15:59:03.133596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.133810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.605 [2024-10-01 15:59:03.133820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.605 [2024-10-01 15:59:03.133827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.605 [2024-10-01 15:59:03.134484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.134499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.605 [2024-10-01 15:59:03.134635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.605 [2024-10-01 15:59:03.134643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.605 [2024-10-01 15:59:03.134649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.605 [2024-10-01 15:59:03.134658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.134663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.134670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.135491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.135508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.143808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.143829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.144083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.144097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.606 [2024-10-01 15:59:03.144104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.144317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.144327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.606 [2024-10-01 15:59:03.144334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.144527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.144539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.144572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.144580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.144586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.144596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.144601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.144607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.144621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.144627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.606 [2024-10-01 15:59:03.154646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.154668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.154850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.154869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.606 [2024-10-01 15:59:03.154877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.154972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.154982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.606 [2024-10-01 15:59:03.154988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.155000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.155009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.155019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.155025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.155041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.606 [2024-10-01 15:59:03.155050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.155055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.155061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.155176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.155186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.165536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.165558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.165739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.165752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.606 [2024-10-01 15:59:03.165760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.165985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.165997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.606 [2024-10-01 15:59:03.166003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.166016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.166025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.166035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.166040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.166047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.166055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.166061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.166066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.166080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.166087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.606 [2024-10-01 15:59:03.176094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.176115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.176341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.176357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.606 [2024-10-01 15:59:03.176367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.176636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.606 [2024-10-01 15:59:03.176651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.606 [2024-10-01 15:59:03.176658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.606 [2024-10-01 15:59:03.177017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.177033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.606 [2024-10-01 15:59:03.177083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.177091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.177098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.606 [2024-10-01 15:59:03.177107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.606 [2024-10-01 15:59:03.177113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.606 [2024-10-01 15:59:03.177119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.606 [2024-10-01 15:59:03.177309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.177321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.606 [2024-10-01 15:59:03.187653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.606 [2024-10-01 15:59:03.187675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.187906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.187920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.607 [2024-10-01 15:59:03.187927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.188077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.188087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.607 [2024-10-01 15:59:03.188093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.188392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.188407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.188559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.188569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.188576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.188585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.188591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.188597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.188627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.188635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.607 [2024-10-01 15:59:03.198827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.198848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.199043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.199056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.607 [2024-10-01 15:59:03.199063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.199159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.199169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.607 [2024-10-01 15:59:03.199175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.199514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.199528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.199685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.199695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.199702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.607 [2024-10-01 15:59:03.199711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.199717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.199724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.199908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.199919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.210246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.210267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.210425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.210437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.607 [2024-10-01 15:59:03.210445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.210540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.210549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.607 [2024-10-01 15:59:03.210556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.210567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.210576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.210585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.210592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.210602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.210611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.210616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.210622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.210635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.210642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.607 [2024-10-01 15:59:03.221568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.221590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.221710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.221722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.607 [2024-10-01 15:59:03.221729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.221811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.221821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.607 [2024-10-01 15:59:03.221828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.221839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.221848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.221858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.221871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.221878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.607 [2024-10-01 15:59:03.221886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.607 [2024-10-01 15:59:03.221891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.607 [2024-10-01 15:59:03.221897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.607 [2024-10-01 15:59:03.221911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.221917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.607 [2024-10-01 15:59:03.233241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.233262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.607 [2024-10-01 15:59:03.233587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.233603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.607 [2024-10-01 15:59:03.233611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.233782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.607 [2024-10-01 15:59:03.233792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.607 [2024-10-01 15:59:03.233803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.607 [2024-10-01 15:59:03.233986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.234001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.607 [2024-10-01 15:59:03.234141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.234151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.234158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.234167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.234173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.234179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.234209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.234217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.608 [2024-10-01 15:59:03.244399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.244421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.244523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.244536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.608 [2024-10-01 15:59:03.244544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.244681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.244691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.608 [2024-10-01 15:59:03.244697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.244848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.244861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.244957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.244966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.244972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.608 [2024-10-01 15:59:03.244982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.244988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.244993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.245109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.245118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.254714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.254735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.254833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.254845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.608 [2024-10-01 15:59:03.254853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.254957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.254967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.608 [2024-10-01 15:59:03.254973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.254985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.254994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.255004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.255009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.255016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.255025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.255031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.255038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.255051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.255057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.608 [2024-10-01 15:59:03.265600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.265622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.265788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.265800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.608 [2024-10-01 15:59:03.265808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.265884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.265894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.608 [2024-10-01 15:59:03.265902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.265913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.265922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.265932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.265938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.265945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.608 [2024-10-01 15:59:03.265953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.265963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.265969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.265982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.265989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.608 [2024-10-01 15:59:03.276543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.276565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.608 [2024-10-01 15:59:03.276693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.276706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.608 [2024-10-01 15:59:03.276714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.276927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.608 [2024-10-01 15:59:03.276938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.608 [2024-10-01 15:59:03.276945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.608 [2024-10-01 15:59:03.276956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.276965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.608 [2024-10-01 15:59:03.276975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.276981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.276987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.276996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.608 [2024-10-01 15:59:03.277002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.608 [2024-10-01 15:59:03.277008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.608 [2024-10-01 15:59:03.277022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.277028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.609 [2024-10-01 15:59:03.287811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.287832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.288002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.288014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.609 [2024-10-01 15:59:03.288022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.288173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.288183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.609 [2024-10-01 15:59:03.288189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.288902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.288918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.289695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.289708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.289715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.609 [2024-10-01 15:59:03.289724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.289730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.289736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.290031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.290041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.299436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.299457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.299607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.299621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.609 [2024-10-01 15:59:03.299628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.299722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.299731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.609 [2024-10-01 15:59:03.299738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.299749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.299758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.299768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.299775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.299781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.299790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.299796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.299802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.299815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.299823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.609 [2024-10-01 15:59:03.310595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.310619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.311135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.311157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.609 [2024-10-01 15:59:03.311165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.311239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.311248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.609 [2024-10-01 15:59:03.311255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.311421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.311434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.311583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.311593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.311600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.609 [2024-10-01 15:59:03.311609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.311615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.311621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.311650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.311658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.321912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.321934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.322435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.322453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.609 [2024-10-01 15:59:03.322461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.322690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.322701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.609 [2024-10-01 15:59:03.322708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.323183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.323199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.323367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.323377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.323384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.323394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.323399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.323409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.323519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.323529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.609 [2024-10-01 15:59:03.333337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.333358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.333473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.333486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.609 [2024-10-01 15:59:03.333493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.333645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.609 [2024-10-01 15:59:03.333655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.609 [2024-10-01 15:59:03.333661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.609 [2024-10-01 15:59:03.334034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.334050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.609 [2024-10-01 15:59:03.334321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.334331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.334338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.609 [2024-10-01 15:59:03.334347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.609 [2024-10-01 15:59:03.334354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.609 [2024-10-01 15:59:03.334360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.609 [2024-10-01 15:59:03.334514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.334523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.609 [2024-10-01 15:59:03.344901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.609 [2024-10-01 15:59:03.344923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.345185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.345202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.610 [2024-10-01 15:59:03.345209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.345387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.345401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.610 [2024-10-01 15:59:03.345408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.345659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.345677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.345714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.345721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.345728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.345736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.345743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.345749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.345884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.345893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.610 [2024-10-01 15:59:03.356522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.356543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.356713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.356725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.610 [2024-10-01 15:59:03.356733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.356832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.356842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.610 [2024-10-01 15:59:03.356850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.356867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.356876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.356886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.356892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.356898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.610 [2024-10-01 15:59:03.356906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.356912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.356918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.356931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.356939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.368585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.368606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.368726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.368739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.610 [2024-10-01 15:59:03.368749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.368835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.368844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.610 [2024-10-01 15:59:03.368851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.368868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.368878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.368887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.368893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.368899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.368908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.368913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.368919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.368933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.368940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.610 [2024-10-01 15:59:03.379993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.380013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.380176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.380189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.610 [2024-10-01 15:59:03.380197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.380320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.380330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.610 [2024-10-01 15:59:03.380337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.380349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.380358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.380368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.380374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.380380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.610 [2024-10-01 15:59:03.380389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.380395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.610 [2024-10-01 15:59:03.380401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.610 [2024-10-01 15:59:03.380420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.380426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.610 [2024-10-01 15:59:03.391378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.391399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.610 [2024-10-01 15:59:03.391516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.391529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.610 [2024-10-01 15:59:03.391536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.391635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.610 [2024-10-01 15:59:03.391644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.610 [2024-10-01 15:59:03.391652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.610 [2024-10-01 15:59:03.391663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.391673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.610 [2024-10-01 15:59:03.391682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.610 [2024-10-01 15:59:03.391689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.391695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.391704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.391710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.391716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.391729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.391735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.611 [2024-10-01 15:59:03.401542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.401562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.401673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.401686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.611 [2024-10-01 15:59:03.401693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.401887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.401898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.611 [2024-10-01 15:59:03.401904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.401916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.401925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.401939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.401945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.401951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.611 [2024-10-01 15:59:03.401960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.401966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.401972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.401985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.401992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.411620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.411650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.411808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.411820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.611 [2024-10-01 15:59:03.411827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.411951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.411962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.611 [2024-10-01 15:59:03.411970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.411978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.411989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.411997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.412002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.412009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.412021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.412028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.412034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.412040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.412051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.611 [2024-10-01 15:59:03.422125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.422148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.422310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.422323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.611 [2024-10-01 15:59:03.422330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.422478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.422488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.611 [2024-10-01 15:59:03.422496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.422508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.422517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.422527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.422533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.422539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.611 [2024-10-01 15:59:03.422548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.422554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.422560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.422573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.422580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.433291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.433313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.433482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.433494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.611 [2024-10-01 15:59:03.433502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.433593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.433602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.611 [2024-10-01 15:59:03.433609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.433620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.433629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.433639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.433646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.433653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.433661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.611 [2024-10-01 15:59:03.433667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.611 [2024-10-01 15:59:03.433674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.611 [2024-10-01 15:59:03.433688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.611 [2024-10-01 15:59:03.433698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.611 [2024-10-01 15:59:03.444351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.444373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.611 [2024-10-01 15:59:03.444545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.444558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.611 [2024-10-01 15:59:03.444566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.444657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.611 [2024-10-01 15:59:03.444666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.611 [2024-10-01 15:59:03.444673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.611 [2024-10-01 15:59:03.444685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.444695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.611 [2024-10-01 15:59:03.444704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.612 [2024-10-01 15:59:03.444711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.612 [2024-10-01 15:59:03.444717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.612 [2024-10-01 15:59:03.444726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.444732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.444738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.444751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.444758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.454431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.454461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.454553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.454565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.612 [2024-10-01 15:59:03.454572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.454661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.454671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.612 [2024-10-01 15:59:03.454677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.454685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.454697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.454705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.454714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.454721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.454733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.454740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.454746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.454752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.454763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.464496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.464747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.464762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.612 [2024-10-01 15:59:03.464770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.464790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.464803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.464816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.464822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.464829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.464841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.465020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.465030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.612 [2024-10-01 15:59:03.465038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.465049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.465059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.465065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.465071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.465083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.475250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.475299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.475485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.475498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.612 [2024-10-01 15:59:03.475505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.475770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.475788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.612 [2024-10-01 15:59:03.475795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.475804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.475833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.475841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.475847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.475853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.475873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.475881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.475886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.475892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.475904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.485362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.485392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.485629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.485644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.612 [2024-10-01 15:59:03.485651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.485961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.485977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.612 [2024-10-01 15:59:03.485984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.485993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.486136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.486147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.486153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.486159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.486189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.486196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.486202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.486208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.612 [2024-10-01 15:59:03.486220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.612 [2024-10-01 15:59:03.495922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.495942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.612 [2024-10-01 15:59:03.496034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.496046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.612 [2024-10-01 15:59:03.496053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.496225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.612 [2024-10-01 15:59:03.496235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.612 [2024-10-01 15:59:03.496241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.612 [2024-10-01 15:59:03.496253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.496262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.612 [2024-10-01 15:59:03.496272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.612 [2024-10-01 15:59:03.496278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.612 [2024-10-01 15:59:03.496285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.496293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.496298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.496305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.496318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.496325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.507337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.507358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.507568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.507580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.613 [2024-10-01 15:59:03.507588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.507710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.507721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.613 [2024-10-01 15:59:03.507728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.507739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.507748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.507758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.507764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.507774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.507783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.507789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.507795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.507808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.507815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.519854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.519888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.520130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.520148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.613 [2024-10-01 15:59:03.520155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.520297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.520307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.613 [2024-10-01 15:59:03.520314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.520325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.520334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.520344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.520350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.520357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.520365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.520371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.520377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.520391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.520398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.531700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.531722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.532092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.532109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.613 [2024-10-01 15:59:03.532117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.532263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.532272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.613 [2024-10-01 15:59:03.532283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.532427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.532440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.532588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.532599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.532606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.532615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.532621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.532628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.532656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.532664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.542517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.542537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.542718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.542731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.613 [2024-10-01 15:59:03.542738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.542929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.542940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.613 [2024-10-01 15:59:03.542946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.542957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.542967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.542976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.542983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.542989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.542998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.543004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.543010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.543023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.543030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.613 [2024-10-01 15:59:03.555474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.555500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.613 [2024-10-01 15:59:03.555761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.555777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.613 [2024-10-01 15:59:03.555784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.555953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.613 [2024-10-01 15:59:03.555963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.613 [2024-10-01 15:59:03.555970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.613 [2024-10-01 15:59:03.556253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.556268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.613 [2024-10-01 15:59:03.556306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.556314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.613 [2024-10-01 15:59:03.556320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.613 [2024-10-01 15:59:03.556329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.613 [2024-10-01 15:59:03.556336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.556343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.556471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.556481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.566332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.566353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.566560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.566573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.614 [2024-10-01 15:59:03.566580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.566796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.566806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.614 [2024-10-01 15:59:03.566812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.566824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.566833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.566843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.614 [2024-10-01 15:59:03.566849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.566855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.566869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.614 [2024-10-01 15:59:03.566879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.566885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.566898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.566905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.579124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.579146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.579457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.579473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.614 [2024-10-01 15:59:03.579480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.579692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.579703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.614 [2024-10-01 15:59:03.579709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.579998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.580013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.580163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.614 [2024-10-01 15:59:03.580173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.580180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.580190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.614 [2024-10-01 15:59:03.580196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.580202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.580232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.580240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.614 [2024-10-01 15:59:03.590345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.590366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.614 [2024-10-01 15:59:03.590597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.590609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.614 [2024-10-01 15:59:03.590616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.590755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.614 [2024-10-01 15:59:03.590765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.614 [2024-10-01 15:59:03.590772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.614 [2024-10-01 15:59:03.590787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.590796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.614 [2024-10-01 15:59:03.590806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.614 [2024-10-01 15:59:03.590812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.614 [2024-10-01 15:59:03.590818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.614 [2024-10-01 15:59:03.590826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.614 [2024-10-01 15:59:03.590832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.614 [2024-10-01 15:59:03.590838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.614 [2024-10-01 15:59:03.590852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.614 [2024-10-01 15:59:03.590858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.614 [2024-10-01 15:59:03.601344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.614 [2024-10-01 15:59:03.601364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.614 [2024-10-01 15:59:03.601575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.614 [2024-10-01 15:59:03.601588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.614 [2024-10-01 15:59:03.601595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.614 [2024-10-01 15:59:03.601740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.614 [2024-10-01 15:59:03.601750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.614 [2024-10-01 15:59:03.601756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.614 [2024-10-01 15:59:03.601768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.614 [2024-10-01 15:59:03.601777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.614 [2024-10-01 15:59:03.601786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.614 [2024-10-01 15:59:03.601792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.614 [2024-10-01 15:59:03.601799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.614 [2024-10-01 15:59:03.601807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.614 [2024-10-01 15:59:03.601813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.614 [2024-10-01 15:59:03.601821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.614 [2024-10-01 15:59:03.601834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.614 [2024-10-01 15:59:03.601841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.614 [2024-10-01 15:59:03.613048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.614 [2024-10-01 15:59:03.613069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.614 [2024-10-01 15:59:03.613304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.614 [2024-10-01 15:59:03.613329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.614 [2024-10-01 15:59:03.613337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.614 [2024-10-01 15:59:03.613478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.614 [2024-10-01 15:59:03.613487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.614 [2024-10-01 15:59:03.613494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.614 [2024-10-01 15:59:03.613505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.614 [2024-10-01 15:59:03.613515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.614 [2024-10-01 15:59:03.613525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.614 [2024-10-01 15:59:03.613530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.614 [2024-10-01 15:59:03.613537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.614 [2024-10-01 15:59:03.613545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.614 [2024-10-01 15:59:03.613551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.614 [2024-10-01 15:59:03.613557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.614 [2024-10-01 15:59:03.613570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.614 [2024-10-01 15:59:03.613577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.615 [2024-10-01 15:59:03.623761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.623782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.623969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.623983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.615 [2024-10-01 15:59:03.623991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.624209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.624220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.615 [2024-10-01 15:59:03.624226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.624672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.624687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.624856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.624871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.624878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.615 [2024-10-01 15:59:03.624888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.624894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.624903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.615 [2024-10-01 15:59:03.625046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.615 [2024-10-01 15:59:03.625055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.615 [2024-10-01 15:59:03.633891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.633911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.634084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.634096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.615 [2024-10-01 15:59:03.634104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.634319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.634330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.615 [2024-10-01 15:59:03.634338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.634350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.634359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.634845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.634855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.634869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.615 [2024-10-01 15:59:03.634880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.634886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.634892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.615 [2024-10-01 15:59:03.635494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.615 [2024-10-01 15:59:03.635506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.615 [2024-10-01 15:59:03.646100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.646120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.646282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.646294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.615 [2024-10-01 15:59:03.646301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.646461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.646471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.615 [2024-10-01 15:59:03.646477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.646489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.646502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.646512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.646518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.646524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.615 [2024-10-01 15:59:03.646532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.646538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.646544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.615 [2024-10-01 15:59:03.646994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.615 [2024-10-01 15:59:03.647005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.615 [2024-10-01 15:59:03.656695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.656716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.615 [2024-10-01 15:59:03.656882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.656896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.615 [2024-10-01 15:59:03.656903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.657066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.615 [2024-10-01 15:59:03.657075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.615 [2024-10-01 15:59:03.657082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.615 [2024-10-01 15:59:03.657094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.657103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.615 [2024-10-01 15:59:03.657113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.615 [2024-10-01 15:59:03.657119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.615 [2024-10-01 15:59:03.657126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.615 [2024-10-01 15:59:03.657134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.657140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.657146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.657160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.657166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.668747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.668767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.669003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.669017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.616 [2024-10-01 15:59:03.669030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.669174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.669184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.616 [2024-10-01 15:59:03.669190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.669202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.669211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.669229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.669236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.669242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.669251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.669256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.669262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.669276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.669282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.616 [2024-10-01 15:59:03.680921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.680944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.681268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.681284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.616 [2024-10-01 15:59:03.681292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.681368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.681378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.616 [2024-10-01 15:59:03.681384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.681527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.681539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.681690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.681700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.681707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.616 [2024-10-01 15:59:03.681715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.681721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.681728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.681761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.681768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.691630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.691650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.691801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.691813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.616 [2024-10-01 15:59:03.691820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.691958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.691968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.616 [2024-10-01 15:59:03.691975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.691986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.691995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.692005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.692010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.692016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.692025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.692031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.692037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.692050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.692056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.616 [2024-10-01 15:59:03.702709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.702729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.702938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.702952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.616 [2024-10-01 15:59:03.702959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.703167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.703177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.616 [2024-10-01 15:59:03.703183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.703195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.703204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.703217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.703224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.703230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.616 [2024-10-01 15:59:03.703238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.703244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.703250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.703263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.703270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.616 [2024-10-01 15:59:03.713806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.713827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.616 [2024-10-01 15:59:03.714115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.714130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.616 [2024-10-01 15:59:03.714137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.714278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.616 [2024-10-01 15:59:03.714288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.616 [2024-10-01 15:59:03.714294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.616 [2024-10-01 15:59:03.714532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.714545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.616 [2024-10-01 15:59:03.714693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.714703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.616 [2024-10-01 15:59:03.714709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.616 [2024-10-01 15:59:03.714718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.616 [2024-10-01 15:59:03.714724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.714730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.714770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.714778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.617 [2024-10-01 15:59:03.724251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.724271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.724437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.724449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.617 [2024-10-01 15:59:03.724456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.724547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.724557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.617 [2024-10-01 15:59:03.724563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.724575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.724584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.724593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.724600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.724606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.617 [2024-10-01 15:59:03.724614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.724619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.724625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.724639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.724646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.737277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.737300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.737630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.737647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.617 [2024-10-01 15:59:03.737654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.737843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.737854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.617 [2024-10-01 15:59:03.737861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.738058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.738073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.738118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.738127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.738134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.738143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.738149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.738156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.738177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.738189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.617 [2024-10-01 15:59:03.747544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.747565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.747744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.747757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.617 [2024-10-01 15:59:03.747764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.747983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.747993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.617 [2024-10-01 15:59:03.748000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.748380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.748394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.748552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.748562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.748568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.617 [2024-10-01 15:59:03.748577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.748583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.748589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.748732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.748741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.758939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.758960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.759132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.759145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.617 [2024-10-01 15:59:03.759153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.759319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.759328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.617 [2024-10-01 15:59:03.759335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.759510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.759522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.617 [2024-10-01 15:59:03.759660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.759674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.759681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.759690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.617 [2024-10-01 15:59:03.759696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.617 [2024-10-01 15:59:03.759702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.617 [2024-10-01 15:59:03.759733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.617 [2024-10-01 15:59:03.759740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.617 [2024-10-01 15:59:03.770315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.770336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.617 [2024-10-01 15:59:03.770523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.770536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.617 [2024-10-01 15:59:03.770543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.617 [2024-10-01 15:59:03.770732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.617 [2024-10-01 15:59:03.770742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.770749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.771003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.771017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.771263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.771273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.771279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.618 [2024-10-01 15:59:03.771288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.771294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.771300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.771450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.771459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.780996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.781016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.781176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.781189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.781195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.781419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.781433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.618 [2024-10-01 15:59:03.781439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.781451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.781460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.781470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.781476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.781482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.781490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.781496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.781502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.781515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.781521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.618 [2024-10-01 15:59:03.793470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.793491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.793675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.618 [2024-10-01 15:59:03.793682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.793851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.793861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.793874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.793886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.793895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.793905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.793910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.793917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.618 [2024-10-01 15:59:03.793925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.793931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.793937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.793950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.793956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.805112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.805147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.805708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.805725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.805733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.805874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.805885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.618 [2024-10-01 15:59:03.805892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.806074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.806087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.806116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.806124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.806131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.806140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.806159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.806166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.806180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.806187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.618 [2024-10-01 15:59:03.815699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.815720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.816077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.816093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.618 [2024-10-01 15:59:03.816101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.816261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.816272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.816278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.618 [2024-10-01 15:59:03.816422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.816435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.618 [2024-10-01 15:59:03.816573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.816582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.816592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.618 [2024-10-01 15:59:03.816601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.618 [2024-10-01 15:59:03.816607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.618 [2024-10-01 15:59:03.816613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.618 [2024-10-01 15:59:03.816643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.816650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.618 [2024-10-01 15:59:03.826753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.826776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.618 [2024-10-01 15:59:03.826908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.618 [2024-10-01 15:59:03.826922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.618 [2024-10-01 15:59:03.826929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.827073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.827083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.619 [2024-10-01 15:59:03.827090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.827427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.827441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.827702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.827712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.827719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.827730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.827736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.827742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.827793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.827802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.619 [2024-10-01 15:59:03.838411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.838432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.838698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.838712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.619 [2024-10-01 15:59:03.838719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.838909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.838920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.619 [2024-10-01 15:59:03.838930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.839061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.839072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.839267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.839277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.839284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.619 [2024-10-01 15:59:03.839294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.839299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.839305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.839344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.839352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.848607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.848629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.848858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.848876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.619 [2024-10-01 15:59:03.848884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.848988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.848998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.619 [2024-10-01 15:59:03.849004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.849344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.849357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.849518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.849528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.849535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.849544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.849550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.849556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.849729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.849739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.619 [2024-10-01 15:59:03.859334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.859357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.859520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.859533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.619 [2024-10-01 15:59:03.859540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.859690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.859700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.619 [2024-10-01 15:59:03.859706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.859718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.859727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.859737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.859743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.859749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.619 [2024-10-01 15:59:03.859757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.859763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.859769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.619 [2024-10-01 15:59:03.859782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.859789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.619 [2024-10-01 15:59:03.871387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.871409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.619 [2024-10-01 15:59:03.871809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.871825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.619 [2024-10-01 15:59:03.871833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.871980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.619 [2024-10-01 15:59:03.871990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.619 [2024-10-01 15:59:03.871996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.619 [2024-10-01 15:59:03.872248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.872262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.619 [2024-10-01 15:59:03.872409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.619 [2024-10-01 15:59:03.872419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.619 [2024-10-01 15:59:03.872426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.872439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.872445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.872451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.872480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.872488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.620 [2024-10-01 15:59:03.883195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.883216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.883428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.883440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.620 [2024-10-01 15:59:03.883447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.883584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.883593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.620 [2024-10-01 15:59:03.883600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.883611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.883620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.883629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.883636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.883642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.620 [2024-10-01 15:59:03.883650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.883656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.883662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.883675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.883682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.895417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.895438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.895624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.895636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.620 [2024-10-01 15:59:03.895643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.895842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.895859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.620 [2024-10-01 15:59:03.895872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.895887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.895896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.895906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.895912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.895918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.895926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.895932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.895938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.895951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.895958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.620 [2024-10-01 15:59:03.907873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.907895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.908127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.908139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.620 [2024-10-01 15:59:03.908147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.908361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.908372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.620 [2024-10-01 15:59:03.908378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.908390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.908399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.908417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.908424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.908431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.620 [2024-10-01 15:59:03.908439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.908445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.908451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.908465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.908471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.920655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.920676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.920833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.920846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.620 [2024-10-01 15:59:03.920853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.920942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.620 [2024-10-01 15:59:03.920952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.620 [2024-10-01 15:59:03.920959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.620 [2024-10-01 15:59:03.920970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.920979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.620 [2024-10-01 15:59:03.920989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.920995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.921002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.921010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.620 [2024-10-01 15:59:03.921016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.620 [2024-10-01 15:59:03.921022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.620 [2024-10-01 15:59:03.921035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.620 [2024-10-01 15:59:03.921042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.620 [2024-10-01 15:59:03.932528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.932550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.620 [2024-10-01 15:59:03.932913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.932930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.621 [2024-10-01 15:59:03.932938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.933161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.933172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.621 [2024-10-01 15:59:03.933179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.933377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.933390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.933413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.933420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.933427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.621 [2024-10-01 15:59:03.933436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.933445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.933451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.933579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.933588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.944337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.944358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.944734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.944750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.621 [2024-10-01 15:59:03.944757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.944952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.944964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.621 [2024-10-01 15:59:03.944971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.945115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.945127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.945177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.945186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.945193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.945201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.945207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.945213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.945337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.945346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.621 [2024-10-01 15:59:03.955145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.955167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.955529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.955545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.621 [2024-10-01 15:59:03.955553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.955700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.955710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.621 [2024-10-01 15:59:03.955716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.955859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.955884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.956023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.956034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.956041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.621 [2024-10-01 15:59:03.956049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.956055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.956062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.956091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.956099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.967070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.967090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.967301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.967313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.621 [2024-10-01 15:59:03.967320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.967484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.621 [2024-10-01 15:59:03.967493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.621 [2024-10-01 15:59:03.967500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.621 [2024-10-01 15:59:03.967512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.967520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.621 [2024-10-01 15:59:03.967530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.967536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.967542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.967551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.621 [2024-10-01 15:59:03.967556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.621 [2024-10-01 15:59:03.967562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.621 [2024-10-01 15:59:03.967576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.621 [2024-10-01 15:59:03.967582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.621 [2024-10-01 15:59:03.978689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.978712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.621 [2024-10-01 15:59:03.978795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:03.978807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.622 [2024-10-01 15:59:03.978818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:03.978983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:03.978993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.622 [2024-10-01 15:59:03.978999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:03.979011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:03.979020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:03.979029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:03.979035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:03.979041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.622 [2024-10-01 15:59:03.979049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:03.979055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:03.979062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:03.979075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:03.979081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:03.990560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:03.990582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:03.990890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:03.990908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.622 [2024-10-01 15:59:03.990916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:03.991132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:03.991143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.622 [2024-10-01 15:59:03.991150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:03.991328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:03.991342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:03.991367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:03.991375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:03.991382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:03.991391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:03.991397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:03.991408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:03.991422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:03.991428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.622 11367.08 IOPS, 44.40 MiB/s [2024-10-01 15:59:04.002212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.002230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.002396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.002408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.622 [2024-10-01 15:59:04.002416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.002634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.002643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.622 [2024-10-01 15:59:04.002650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.003438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.003454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.003641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:04.003652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:04.003660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.622 [2024-10-01 15:59:04.003669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:04.003675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:04.003682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:04.003705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:04.003712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:04.012831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.012853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.013049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.013061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.622 [2024-10-01 15:59:04.013069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.013231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.013241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.622 [2024-10-01 15:59:04.013248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.013260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.013269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.013283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:04.013289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:04.013296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:04.013306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:04.013312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:04.013319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.622 [2024-10-01 15:59:04.013333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.622 [2024-10-01 15:59:04.013340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.622 [2024-10-01 15:59:04.024111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.024132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.622 [2024-10-01 15:59:04.024462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.024478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.622 [2024-10-01 15:59:04.024486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.024657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.622 [2024-10-01 15:59:04.024667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.622 [2024-10-01 15:59:04.024673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.622 [2024-10-01 15:59:04.024929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.024943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.622 [2024-10-01 15:59:04.025090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.622 [2024-10-01 15:59:04.025100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.622 [2024-10-01 15:59:04.025107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.622 [2024-10-01 15:59:04.025116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.622 [2024-10-01 15:59:04.025122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.622 [2024-10-01 15:59:04.025128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.622 [2024-10-01 15:59:04.025154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.622 [2024-10-01 15:59:04.025161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.622 [2024-10-01 15:59:04.036024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.622 [2024-10-01 15:59:04.036044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.622 [2024-10-01 15:59:04.036368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.622 [2024-10-01 15:59:04.036383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.622 [2024-10-01 15:59:04.036395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.036484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.036493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.623 [2024-10-01 15:59:04.036500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.036642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.036654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.036802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.036811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.036818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.036827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.036833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.036839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.036873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.036880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.046467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.046488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.046708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.046721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.623 [2024-10-01 15:59:04.046728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.046873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.046884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.623 [2024-10-01 15:59:04.046891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.046903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.046912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.046921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.046927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.046934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.046942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.046947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.046953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.046971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.046978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.056991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.057013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.057191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.057203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.623 [2024-10-01 15:59:04.057213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.057311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.057320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.623 [2024-10-01 15:59:04.057327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.057580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.057592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.058188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.058200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.058207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.058216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.058222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.058228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.058715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.058727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.068805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.068825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.068960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.068973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.623 [2024-10-01 15:59:04.068981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.069117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.069127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.623 [2024-10-01 15:59:04.069133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.069256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.069269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.069352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.069365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.069371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.069381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.069387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.069393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.069417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.069424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.623 [2024-10-01 15:59:04.079268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.079288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.623 [2024-10-01 15:59:04.079438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.079451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.623 [2024-10-01 15:59:04.079458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.079525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.623 [2024-10-01 15:59:04.079534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.623 [2024-10-01 15:59:04.079541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.623 [2024-10-01 15:59:04.079553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.079561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.623 [2024-10-01 15:59:04.079572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.079578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.079585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.623 [2024-10-01 15:59:04.079593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.623 [2024-10-01 15:59:04.079598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.623 [2024-10-01 15:59:04.079604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.079618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.079624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.092021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.092042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.092532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.624 [2024-10-01 15:59:04.092540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.092599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.092608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.624 [2024-10-01 15:59:04.092615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.093260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.093278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.093654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.093665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.093672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.093681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.093688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.093694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.093743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.093751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.102741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.102762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.102941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.102955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.624 [2024-10-01 15:59:04.102963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.103131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.103139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.624 [2024-10-01 15:59:04.103146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.103308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.103320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.103463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.103472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.103478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.103488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.103494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.103500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.103642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.103651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.113347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.113368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.113519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.113532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.624 [2024-10-01 15:59:04.113540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.113730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.113740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.624 [2024-10-01 15:59:04.113747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.113759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.113768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.113777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.113783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.113790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.113798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.113804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.113810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.113824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.113830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.125561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.125582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.125730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.125742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.624 [2024-10-01 15:59:04.125749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.125890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.624 [2024-10-01 15:59:04.125900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.624 [2024-10-01 15:59:04.125907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.624 [2024-10-01 15:59:04.125918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.125927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.624 [2024-10-01 15:59:04.125937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.125943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.125953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.125962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.624 [2024-10-01 15:59:04.125968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.624 [2024-10-01 15:59:04.125973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.624 [2024-10-01 15:59:04.126361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.126371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.624 [2024-10-01 15:59:04.137750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.624 [2024-10-01 15:59:04.137771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.137966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.137980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.625 [2024-10-01 15:59:04.137987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.138131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.138140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.625 [2024-10-01 15:59:04.138147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.138302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.138315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.138454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.138466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.138473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.138482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.138488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.138494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.138638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.138647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.148305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.148326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.148490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.148503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.625 [2024-10-01 15:59:04.148510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.148634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.148643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.625 [2024-10-01 15:59:04.148653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.148665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.148673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.148683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.148689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.148695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.148704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.148709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.148715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.148728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.148735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.160442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.160463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.160782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.160797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.625 [2024-10-01 15:59:04.160805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.160948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.160959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.625 [2024-10-01 15:59:04.160965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.161109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.161121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.161146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.161154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.161160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.161168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.161174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.161180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.161194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.161200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.171310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.171334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.625 [2024-10-01 15:59:04.171570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.171582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.625 [2024-10-01 15:59:04.171590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.171807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.625 [2024-10-01 15:59:04.171817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.625 [2024-10-01 15:59:04.171823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.625 [2024-10-01 15:59:04.171835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.171844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.625 [2024-10-01 15:59:04.171854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.171860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.171871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.171879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.625 [2024-10-01 15:59:04.171885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.625 [2024-10-01 15:59:04.171891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.625 [2024-10-01 15:59:04.171904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.171911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.625 [2024-10-01 15:59:04.183704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.625 [2024-10-01 15:59:04.183725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.625 [2024-10-01 15:59:04.183957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.625 [2024-10-01 15:59:04.183970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.625 [2024-10-01 15:59:04.183978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.625 [2024-10-01 15:59:04.184141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.625 [2024-10-01 15:59:04.184151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.625 [2024-10-01 15:59:04.184157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.184569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.184584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.184693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.184701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.184707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.626 [2024-10-01 15:59:04.184720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.184726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.184732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.184810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.184819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.193784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.193981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.194192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.194207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.626 [2024-10-01 15:59:04.194215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.194489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.194504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.626 [2024-10-01 15:59:04.194511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.194520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.195000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.195013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.195019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.195026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.195255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.195266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.195271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.195278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.195424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.626 [2024-10-01 15:59:04.205195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.205215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.205374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.205386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.626 [2024-10-01 15:59:04.205394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.205541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.205551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.626 [2024-10-01 15:59:04.205558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.205847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.205868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.206029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.206040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.206046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.626 [2024-10-01 15:59:04.206056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.206062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.206068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.206475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.206487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.215722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.215742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.215934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.215947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.626 [2024-10-01 15:59:04.215955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.216147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.216157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.626 [2024-10-01 15:59:04.216164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.216175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.216184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.216194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.216201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.216208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.216217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.216223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.216229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.216242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.626 [2024-10-01 15:59:04.216249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.626 [2024-10-01 15:59:04.227938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.227959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.626 [2024-10-01 15:59:04.228350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.228366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.626 [2024-10-01 15:59:04.228374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.228524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.626 [2024-10-01 15:59:04.228534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.626 [2024-10-01 15:59:04.228540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.626 [2024-10-01 15:59:04.228745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.228760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.626 [2024-10-01 15:59:04.228966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.228978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.228985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.626 [2024-10-01 15:59:04.228994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.626 [2024-10-01 15:59:04.229000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.626 [2024-10-01 15:59:04.229006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.626 [2024-10-01 15:59:04.229044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.229052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.239934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.239956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.240344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.240360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.627 [2024-10-01 15:59:04.240368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.240585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.240595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.627 [2024-10-01 15:59:04.240602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.240746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.240759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.240967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.240978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.240985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.240995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.241007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.241013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.241045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.241054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.627 [2024-10-01 15:59:04.250510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.250532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.250854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.250876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.627 [2024-10-01 15:59:04.250885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.251078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.251089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.627 [2024-10-01 15:59:04.251096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.251244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.251256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.251284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.251291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.251298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.627 [2024-10-01 15:59:04.251307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.251313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.251320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.251447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.251456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.260912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.260933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.261048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.261061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.627 [2024-10-01 15:59:04.261068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.261216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.261225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.627 [2024-10-01 15:59:04.261232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.261243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.261257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.261267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.261273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.261279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.261288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.261293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.261299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.261313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.261319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.627 [2024-10-01 15:59:04.272093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.272114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.272350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.272362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.627 [2024-10-01 15:59:04.272370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.272515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.272524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.627 [2024-10-01 15:59:04.272531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.272543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.272553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.272562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.272568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.272574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.627 [2024-10-01 15:59:04.272583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.272589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.627 [2024-10-01 15:59:04.272595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.627 [2024-10-01 15:59:04.272609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.272615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.627 [2024-10-01 15:59:04.283457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.283478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.627 [2024-10-01 15:59:04.283917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.283939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.627 [2024-10-01 15:59:04.283947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.284084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.627 [2024-10-01 15:59:04.284094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.627 [2024-10-01 15:59:04.284101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.627 [2024-10-01 15:59:04.284246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.284258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.627 [2024-10-01 15:59:04.284284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.627 [2024-10-01 15:59:04.284292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.628 [2024-10-01 15:59:04.284298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.628 [2024-10-01 15:59:04.284307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.628 [2024-10-01 15:59:04.284313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.628 [2024-10-01 15:59:04.284319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.628 [2024-10-01 15:59:04.284333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.628 [2024-10-01 15:59:04.284339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.628 [2024-10-01 15:59:04.293539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.628 [2024-10-01 15:59:04.293568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.628 [2024-10-01 15:59:04.293776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.628 [2024-10-01 15:59:04.293789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.628 [2024-10-01 15:59:04.293796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.628 [2024-10-01 15:59:04.293993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.628 [2024-10-01 15:59:04.294004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.628 [2024-10-01 15:59:04.294011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.628 [2024-10-01 15:59:04.294020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.628 [2024-10-01 15:59:04.294031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.628 [2024-10-01 15:59:04.294039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.628 [2024-10-01 15:59:04.294045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.628 [2024-10-01 15:59:04.294051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.628 [2024-10-01 15:59:04.294064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.294071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.294080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.294086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.294097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.304800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.304821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.305057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.305070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.628 [2024-10-01 15:59:04.305078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.305269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.305280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.628 [2024-10-01 15:59:04.305287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.305299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.305308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.305318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.305324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.305330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.305339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.305345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.305351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.305364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.305371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.317445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.317467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.317832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.317848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.628 [2024-10-01 15:59:04.317856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.318077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.318088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.628 [2024-10-01 15:59:04.318095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.318350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.318363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.318534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.318545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.318552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.318561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.318568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.318574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.318716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.318726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.328705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.328726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.328955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.328969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.628 [2024-10-01 15:59:04.328977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.329142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.329153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.628 [2024-10-01 15:59:04.329160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.329353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.329366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.329459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.329467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.329473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.329482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.329488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.329495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.329515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.329523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.339779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.339801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.340011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.340024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.628 [2024-10-01 15:59:04.340035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.340182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.340192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.628 [2024-10-01 15:59:04.340198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.340330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.340341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.340480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.340489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.340496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.340504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.628 [2024-10-01 15:59:04.340510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.628 [2024-10-01 15:59:04.340517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.628 [2024-10-01 15:59:04.340546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.340554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.628 [2024-10-01 15:59:04.350900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.350921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.628 [2024-10-01 15:59:04.351236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.351251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.628 [2024-10-01 15:59:04.351258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.351458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.628 [2024-10-01 15:59:04.351468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.628 [2024-10-01 15:59:04.351475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.628 [2024-10-01 15:59:04.351647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.351660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.628 [2024-10-01 15:59:04.352480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.352493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.352501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.352510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.352516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.352522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.352834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.352845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.360980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.361009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.361235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.361247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.629 [2024-10-01 15:59:04.361254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.361410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.361420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.629 [2024-10-01 15:59:04.361426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.361435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.361447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.361454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.361460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.361466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.361479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.361485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.361491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.361497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.361509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.371693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.371713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.371874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.371887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.629 [2024-10-01 15:59:04.371894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.372035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.372045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.629 [2024-10-01 15:59:04.372051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.372062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.372072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.372081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.372091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.372097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.372106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.372112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.372118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.372131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.372138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.381814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.381835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.382077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.382090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.629 [2024-10-01 15:59:04.382097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.382239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.382249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.629 [2024-10-01 15:59:04.382256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.382267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.382276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.382286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.382292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.382299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.382307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.382314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.382320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.382333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.382339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.392878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.392900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.393136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.393148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.629 [2024-10-01 15:59:04.393156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.393302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.393312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.629 [2024-10-01 15:59:04.393319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.393331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.393340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.393349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.393355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.393362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.393370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.629 [2024-10-01 15:59:04.393376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.629 [2024-10-01 15:59:04.393382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.629 [2024-10-01 15:59:04.393396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.393402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.629 [2024-10-01 15:59:04.404704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.404725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.629 [2024-10-01 15:59:04.404923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.404937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.629 [2024-10-01 15:59:04.404945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.405090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.629 [2024-10-01 15:59:04.405100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.629 [2024-10-01 15:59:04.405107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.629 [2024-10-01 15:59:04.405347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.405360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.629 [2024-10-01 15:59:04.405396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.405404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.405411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.405420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.405425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.405432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.405560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.405572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.414883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.414903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.415135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.415148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.630 [2024-10-01 15:59:04.415155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.415363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.415374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.630 [2024-10-01 15:59:04.415380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.415392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.415402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.415411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.415418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.415424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.415432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.415438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.415444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.415458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.415464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.427543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.427564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.427713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.427725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.630 [2024-10-01 15:59:04.427732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.427948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.427959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.630 [2024-10-01 15:59:04.427966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.427985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.427995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.428004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.428011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.428020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.428029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.428034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.428040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.428054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.428060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.438226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.438247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.438458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.438470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.630 [2024-10-01 15:59:04.438477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.438637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.438648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.630 [2024-10-01 15:59:04.438654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.438666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.438674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.438685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.438691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.438698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.438706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.438712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.438718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.438731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.438738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.449969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.449991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.450162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.450175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.630 [2024-10-01 15:59:04.450183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.450395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.450404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.630 [2024-10-01 15:59:04.450415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.450426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.450435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.450445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.450451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.450457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.450467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.450473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.450478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.450491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.450498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.461472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.461494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.630 [2024-10-01 15:59:04.461728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.461741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.630 [2024-10-01 15:59:04.461748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.461884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.630 [2024-10-01 15:59:04.461894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.630 [2024-10-01 15:59:04.461901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.630 [2024-10-01 15:59:04.461912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.461921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.630 [2024-10-01 15:59:04.461931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.461937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.461944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.461953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.630 [2024-10-01 15:59:04.461958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.630 [2024-10-01 15:59:04.461964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.630 [2024-10-01 15:59:04.461978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.630 [2024-10-01 15:59:04.461984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.631 [2024-10-01 15:59:04.473046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.473069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.473234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.473246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.473253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.473445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.473455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.473462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.473473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.473482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.473492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.473498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.473505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.631 [2024-10-01 15:59:04.473512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.473518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.473524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.473537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.473544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.484612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.484635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.484912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.484927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.484934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.485131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.485141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.485148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.485160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.485169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.485179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.485185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.485191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.485203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.485209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.485215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.485228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.485235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.631 [2024-10-01 15:59:04.496200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.496222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.496388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.496400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.496407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.496625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.496635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.496642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.496653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.496663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.496672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.496679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.496685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.631 [2024-10-01 15:59:04.496693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.496699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.496705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.496719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.496725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.507921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.507943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.508176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.508189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.508196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.508338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.508348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.508355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.508369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.508379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.508389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.508395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.508401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.508409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.508416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.508422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.508435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.508442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.631 [2024-10-01 15:59:04.519608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.519630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.519875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.519888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.519896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.520089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.520100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.520107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.520118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.520128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.520137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.520143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.520149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.631 [2024-10-01 15:59:04.520158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.520164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.520170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.520183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.520191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.631 [2024-10-01 15:59:04.532729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.532753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.631 [2024-10-01 15:59:04.533150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.533167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.631 [2024-10-01 15:59:04.533175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.533318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-10-01 15:59:04.533328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.631 [2024-10-01 15:59:04.533335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.631 [2024-10-01 15:59:04.533615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.533629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.631 [2024-10-01 15:59:04.533792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.533803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.631 [2024-10-01 15:59:04.533810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.631 [2024-10-01 15:59:04.533819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.631 [2024-10-01 15:59:04.533825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.533831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.533868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.533877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.632 [2024-10-01 15:59:04.543838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.543867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.544031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.544045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.632 [2024-10-01 15:59:04.544053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.544150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.544160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.632 [2024-10-01 15:59:04.544167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.544179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.544189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.544198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.544204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.544210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.632 [2024-10-01 15:59:04.544219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.544229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.544235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.544249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.544255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.555004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.555028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.555279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.555293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.632 [2024-10-01 15:59:04.555300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.555448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.555458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.632 [2024-10-01 15:59:04.555465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.555705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.555718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.555764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.555773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.555779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.555788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.555794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.555800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.555814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.555820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.632 [2024-10-01 15:59:04.565325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.565346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.565603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.565617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.632 [2024-10-01 15:59:04.565624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.565843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.565855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.632 [2024-10-01 15:59:04.565867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.566107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.566123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.566273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.566282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.566289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.632 [2024-10-01 15:59:04.566298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.566304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.566310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.566340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.566347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.577014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.577035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.632 [2024-10-01 15:59:04.577201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.577214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.632 [2024-10-01 15:59:04.577221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.577442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-10-01 15:59:04.577452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.632 [2024-10-01 15:59:04.577459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.632 [2024-10-01 15:59:04.577470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.577479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.632 [2024-10-01 15:59:04.577496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.577504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.577511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.577519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.632 [2024-10-01 15:59:04.577525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.632 [2024-10-01 15:59:04.577532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.632 [2024-10-01 15:59:04.577545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.632 [2024-10-01 15:59:04.577552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.632 [2024-10-01 15:59:04.589447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.632 [2024-10-01 15:59:04.589469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.632 [2024-10-01 15:59:04.589705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.632 [2024-10-01 15:59:04.589722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.632 [2024-10-01 15:59:04.589731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.632 [2024-10-01 15:59:04.589871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.632 [2024-10-01 15:59:04.589882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.632 [2024-10-01 15:59:04.589889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.632 [2024-10-01 15:59:04.589900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.632 [2024-10-01 15:59:04.589910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.632 [2024-10-01 15:59:04.589928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.632 [2024-10-01 15:59:04.589935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.632 [2024-10-01 15:59:04.589941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.632 [2024-10-01 15:59:04.589951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.632 [2024-10-01 15:59:04.589957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.632 [2024-10-01 15:59:04.589963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.632 [2024-10-01 15:59:04.589976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.632 [2024-10-01 15:59:04.589983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.632 [2024-10-01 15:59:04.600882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.632 [2024-10-01 15:59:04.600904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.632 [2024-10-01 15:59:04.601074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.632 [2024-10-01 15:59:04.601086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.601093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.601241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.601250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.601257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.601268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.601277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.601287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.601294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.601300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.601308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.601314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.601324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.601338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.601344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.612008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.612029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.612159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.612171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.612178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.612266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.612276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.612283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.612574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.612587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.612737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.612747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.612754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.612763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.612769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.612775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.613131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.613143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.622527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.622548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.622780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.622792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.622800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.622877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.622888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.622894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.622905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.622914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.622927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.622933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.622940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.622948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.622953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.622960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.622973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.622980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.634391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.634414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.634683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.634697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.634704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.634919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.634930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.634937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.634948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.634957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.634975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.634982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.634989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.634997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.635004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.635010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.635023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.635029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.646400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.646422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.646832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.646849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.646860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.646990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.647001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.647008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.647039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.647049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.647059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.647065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.647071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.647080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.647088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.647094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.647108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.647114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.633 [2024-10-01 15:59:04.656697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.656717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.633 [2024-10-01 15:59:04.656878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.656891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.633 [2024-10-01 15:59:04.656898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.657045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.633 [2024-10-01 15:59:04.657055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.633 [2024-10-01 15:59:04.657062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.633 [2024-10-01 15:59:04.658030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.658046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.633 [2024-10-01 15:59:04.658057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.633 [2024-10-01 15:59:04.658064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.633 [2024-10-01 15:59:04.658071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.633 [2024-10-01 15:59:04.658080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.658086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.658092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.658109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.658115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.668347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.668368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.668544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.668556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.634 [2024-10-01 15:59:04.668564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.668720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.668730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.634 [2024-10-01 15:59:04.668737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.668748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.668757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.668767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.668773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.668779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.668788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.668794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.668800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.668813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.668820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.680125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.680147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.680451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.680467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.634 [2024-10-01 15:59:04.680475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.680685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.680696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.634 [2024-10-01 15:59:04.680703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.680847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.680860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.680898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.680910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.680916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.680926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.680931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.680938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.680951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.680957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.691873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.691895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.692350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.692367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.634 [2024-10-01 15:59:04.692375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.692623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.692634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.634 [2024-10-01 15:59:04.692640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.692798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.692811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.692838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.692845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.692852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.692861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.692873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.692879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.693073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.693082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.702839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.702861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.703007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.703020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.634 [2024-10-01 15:59:04.703027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.703244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.703254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.634 [2024-10-01 15:59:04.703260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.703391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.703403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.703557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.703568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.703575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.703584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.703590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.703596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.703631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.703640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.712927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.712956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.634 [2024-10-01 15:59:04.713189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.713202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.634 [2024-10-01 15:59:04.713210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.713790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.634 [2024-10-01 15:59:04.713806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.634 [2024-10-01 15:59:04.713814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.634 [2024-10-01 15:59:04.713823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.714207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.634 [2024-10-01 15:59:04.714221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.714227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.634 [2024-10-01 15:59:04.714234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.634 [2024-10-01 15:59:04.714391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.634 [2024-10-01 15:59:04.714401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.634 [2024-10-01 15:59:04.714407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.714413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.714447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.725022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.725045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.725410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.725426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.635 [2024-10-01 15:59:04.725434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.725684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.725695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.635 [2024-10-01 15:59:04.725702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.725959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.725973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.726018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.726026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.726032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.726041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.726047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.726054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.726067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.726074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.735748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.735770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.736023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.736039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.635 [2024-10-01 15:59:04.736047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.736241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.736252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.635 [2024-10-01 15:59:04.736259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.736404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.736417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.736555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.736565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.736575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.736585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.736591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.736597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.736626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.736634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.746751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.746772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.746930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.746944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.635 [2024-10-01 15:59:04.746951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.747098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.747108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.635 [2024-10-01 15:59:04.747115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.747127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.747137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.635 [2024-10-01 15:59:04.747147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.747153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.747159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.747167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.635 [2024-10-01 15:59:04.747173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.635 [2024-10-01 15:59:04.747179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.635 [2024-10-01 15:59:04.747193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.747199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.635 [2024-10-01 15:59:04.757910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.757932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.635 [2024-10-01 15:59:04.758129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.758142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.635 [2024-10-01 15:59:04.758150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.758319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-10-01 15:59:04.758331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.635 [2024-10-01 15:59:04.758338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.635 [2024-10-01 15:59:04.758350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*:
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.635 [2024-10-01 15:59:04.758358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.635 [2024-10-01 15:59:04.758368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.635 [2024-10-01 15:59:04.758374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.635 [2024-10-01 15:59:04.758381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.635 [2024-10-01 15:59:04.758389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.635 [2024-10-01 15:59:04.758395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.635 [2024-10-01 15:59:04.758400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.635 [2024-10-01 15:59:04.758414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.635 [2024-10-01 15:59:04.758420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.635 [2024-10-01 15:59:04.768935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.635 [2024-10-01 15:59:04.768957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.635 [2024-10-01 15:59:04.769120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-10-01 15:59:04.769132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.635 [2024-10-01 15:59:04.769140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.635 [2024-10-01 15:59:04.769235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-10-01 15:59:04.769245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.635 [2024-10-01 15:59:04.769252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.635 [2024-10-01 15:59:04.769383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.635 [2024-10-01 15:59:04.769394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.635 [2024-10-01 15:59:04.769532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.635 [2024-10-01 15:59:04.769542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.635 [2024-10-01 15:59:04.769549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.635 [2024-10-01 15:59:04.769557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.635 [2024-10-01 15:59:04.769563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.635 [2024-10-01 15:59:04.769569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.635 [2024-10-01 15:59:04.769598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.635 [2024-10-01 15:59:04.769606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.635 [2024-10-01 15:59:04.779400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.635 [2024-10-01 15:59:04.779424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.635 [2024-10-01 15:59:04.779603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.779615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.779623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.779883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.779894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.779902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.779913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.779923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.779933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.779939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.779945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.779954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.779960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.779966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.779979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.779985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.636 [2024-10-01 15:59:04.792076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.792097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.792282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.792295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.792302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.792398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.792408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.792415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.792426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.792435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.792445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.792451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.792457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.636 [2024-10-01 15:59:04.792469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.792475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.792481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.792495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.792503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.804140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.804161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.804351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.804362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.804370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.804582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.804593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.804600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.804851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.804871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.805119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.805129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.805136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.805145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.805152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.805158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.805307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.805316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.636 [2024-10-01 15:59:04.815183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.815204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.815436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.815449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.815456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.815696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.815706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.815720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.815732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.815741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.815750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.815756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.815762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.636 [2024-10-01 15:59:04.815770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.815776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.815783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.815796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.815802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.826641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.826665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.827075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.827093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.827101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.827246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.827256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.827263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.827516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.827530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.827678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.827688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.827696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.827706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.827712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.827718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.827748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.827755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.636 [2024-10-01 15:59:04.839087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.839108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.839277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.839289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.636 [2024-10-01 15:59:04.839296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.839467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.839477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.839483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.839495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.839504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.636 [2024-10-01 15:59:04.839515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.839521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.839527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.636 [2024-10-01 15:59:04.839536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.636 [2024-10-01 15:59:04.839541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.636 [2024-10-01 15:59:04.839547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.636 [2024-10-01 15:59:04.839561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.839568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.636 [2024-10-01 15:59:04.850610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.850633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.636 [2024-10-01 15:59:04.850895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.850910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.636 [2024-10-01 15:59:04.850919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.636 [2024-10-01 15:59:04.851113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-10-01 15:59:04.851124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.851132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.851244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.851258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.851418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.851431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.851438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.851448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.851459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.851465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.851494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.851502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.637 [2024-10-01 15:59:04.861624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.861646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.861894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.861909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.861917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.862017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.862029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.637 [2024-10-01 15:59:04.862035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.862047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.862057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.862067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.862073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.862080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.637 [2024-10-01 15:59:04.862089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.862095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.862101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.862116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.862123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.871793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.871816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.871993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.872008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.637 [2024-10-01 15:59:04.872016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.872163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.872174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.872181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.872197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.872207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.872217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.872224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.872230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.872239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.872245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.872252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.872265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.872272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.637 [2024-10-01 15:59:04.883002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.883025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.883196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.883209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.883217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.883317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.883328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.637 [2024-10-01 15:59:04.883336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.883347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.883356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.883374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.883381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.883389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.637 [2024-10-01 15:59:04.883399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.883405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.883411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.883426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.883433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.893538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.893561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.893715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.893732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.637 [2024-10-01 15:59:04.893740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.893936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.893948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.893955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.893966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.893976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.637 [2024-10-01 15:59:04.893987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.893994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.894000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.894009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.637 [2024-10-01 15:59:04.894015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.637 [2024-10-01 15:59:04.894022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.637 [2024-10-01 15:59:04.894037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.637 [2024-10-01 15:59:04.894045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.637 [2024-10-01 15:59:04.905066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.905087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.637 [2024-10-01 15:59:04.905204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.905218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.637 [2024-10-01 15:59:04.905226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.905322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-10-01 15:59:04.905332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.637 [2024-10-01 15:59:04.905340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.637 [2024-10-01 15:59:04.905351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.905361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.905371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.905377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.905385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.638 [2024-10-01 15:59:04.905394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.905400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.905409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.905423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.905429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.916904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.916928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.917835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.917855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.638 [2024-10-01 15:59:04.917868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.917949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.917959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.638 [2024-10-01 15:59:04.917966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.918515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.918532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.918715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.918727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.918734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.918745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.918751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.918758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.918789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.918797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.638 [2024-10-01 15:59:04.928719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.928742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.929076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.929094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.638 [2024-10-01 15:59:04.929102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.929260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.929270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.638 [2024-10-01 15:59:04.929277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.929420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.929438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.929575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.929588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.929594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.638 [2024-10-01 15:59:04.929605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.929612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.929618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.929648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.929656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.939639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.939661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.939981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.939999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.638 [2024-10-01 15:59:04.940008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.940097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.940108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.638 [2024-10-01 15:59:04.940116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.940260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.940273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.940297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.940304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.940311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.940322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.940328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.940334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.638 [2024-10-01 15:59:04.940347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.638 [2024-10-01 15:59:04.940354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.638 [2024-10-01 15:59:04.950870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.950892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.638 [2024-10-01 15:59:04.951078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.951092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.638 [2024-10-01 15:59:04.951104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.951201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.638 [2024-10-01 15:59:04.951212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.638 [2024-10-01 15:59:04.951219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.638 [2024-10-01 15:59:04.951231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.951240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.638 [2024-10-01 15:59:04.951250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.951257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.951264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.638 [2024-10-01 15:59:04.951272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.638 [2024-10-01 15:59:04.951279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.638 [2024-10-01 15:59:04.951285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.951299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.951306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.963212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.963235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.963397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.963411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.639 [2024-10-01 15:59:04.963418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.963578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.963590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.639 [2024-10-01 15:59:04.963597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.963616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.963628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.963639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.963645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.963652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.963661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.963667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.963674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.963692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.963699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.639 [2024-10-01 15:59:04.973921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.973943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.974106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.974118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.639 [2024-10-01 15:59:04.974126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.974345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.974356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.639 [2024-10-01 15:59:04.974364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.974376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.974386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.974396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.974403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.974409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.639 [2024-10-01 15:59:04.974418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.974424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.974432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.974445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.974453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.986577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.986600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.986895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.986913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.639 [2024-10-01 15:59:04.986921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.987088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.987099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.639 [2024-10-01 15:59:04.987106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.987458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.987474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.987631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.987644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.987651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.987660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.987667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.987674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.987816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.987827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.639 [2024-10-01 15:59:04.998219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.998241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:04.998618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.998635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.639 [2024-10-01 15:59:04.998645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.998824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:04.998836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.639 [2024-10-01 15:59:04.998843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:04.999099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.999114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:04.999151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.999160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.999167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.639 [2024-10-01 15:59:04.999176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:04.999182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:04.999189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:04.999203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:04.999210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 11370.69 IOPS, 44.42 MiB/s [2024-10-01 15:59:05.010051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:05.010074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.639 [2024-10-01 15:59:05.010291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:05.010306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.639 [2024-10-01 15:59:05.010314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:05.010535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.639 [2024-10-01 15:59:05.010548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.639 [2024-10-01 15:59:05.010555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.639 [2024-10-01 15:59:05.010795] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:05.010810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.639 [2024-10-01 15:59:05.010848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:05.010857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:05.010870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:05.010879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.639 [2024-10-01 15:59:05.010886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.639 [2024-10-01 15:59:05.010893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.639 [2024-10-01 15:59:05.010907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.639 [2024-10-01 15:59:05.010915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.639 [2024-10-01 15:59:05.022702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.022724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.023060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.023078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.640 [2024-10-01 15:59:05.023086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.023212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.023223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.640 [2024-10-01 15:59:05.023230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.023512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.023528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.023679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.023690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.023698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.640 [2024-10-01 15:59:05.023707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.023714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.023721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.023753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.023764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.033746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.033768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.034104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.034122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.640 [2024-10-01 15:59:05.034130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.034296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.034307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.640 [2024-10-01 15:59:05.034314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.034596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.034613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.034764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.034776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.034783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.034793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.034800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.034806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.034838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.034845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.640 [2024-10-01 15:59:05.045110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.045133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.045539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.045557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.640 [2024-10-01 15:59:05.045565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.045657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.045669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.640 [2024-10-01 15:59:05.045676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.045832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.045846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.045878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.045890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.045897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.640 [2024-10-01 15:59:05.045907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.045913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.045919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.045933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.045941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.056664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.056686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.056993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.057012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.640 [2024-10-01 15:59:05.057020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.057185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.057196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.640 [2024-10-01 15:59:05.057203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.057233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.057244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.057254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.057261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.057268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.057278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.057284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.057291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.057304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.057311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.640 [2024-10-01 15:59:05.067086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.067109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.067276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.067290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.640 [2024-10-01 15:59:05.067298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.067458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.067473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.640 [2024-10-01 15:59:05.067482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.640 [2024-10-01 15:59:05.067938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.067955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.640 [2024-10-01 15:59:05.068124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.068136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.068143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.640 [2024-10-01 15:59:05.068154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.640 [2024-10-01 15:59:05.068160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.640 [2024-10-01 15:59:05.068168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.640 [2024-10-01 15:59:05.068199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.068207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.640 [2024-10-01 15:59:05.079456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.079480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.640 [2024-10-01 15:59:05.079856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-10-01 15:59:05.079881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.079890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.080043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.080054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.641 [2024-10-01 15:59:05.080062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.080092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.080103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.080121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.080130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.080138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.080147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.080153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.080160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.080173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.080182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.641 [2024-10-01 15:59:05.089539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.089570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.089777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.089791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.641 [2024-10-01 15:59:05.089799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.090940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.090961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.090970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.090982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.091230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.091244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.091252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.091259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.641 [2024-10-01 15:59:05.091298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.091306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.091313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.091319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.091332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.101866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.101889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.102257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.102275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.641 [2024-10-01 15:59:05.102284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.102423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.102434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.102441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.102616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.102630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.102782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.102795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.102809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.102820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.102826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.102833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.102871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.102879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.641 [2024-10-01 15:59:05.112777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.112799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.113214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.113233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.113241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.113492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.113504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.641 [2024-10-01 15:59:05.113511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.113764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.113779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.113934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.113946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.113953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.641 [2024-10-01 15:59:05.113963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.113970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.113977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.114008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.114015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.123969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.123991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.124216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.124230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.641 [2024-10-01 15:59:05.124237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.124359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.124370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.124381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.124393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.124402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.641 [2024-10-01 15:59:05.124419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.124427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.124434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.124443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.641 [2024-10-01 15:59:05.124449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.641 [2024-10-01 15:59:05.124455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.641 [2024-10-01 15:59:05.124470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.641 [2024-10-01 15:59:05.124478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.641 [2024-10-01 15:59:05.134050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.134081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.641 [2024-10-01 15:59:05.134258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.134272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.641 [2024-10-01 15:59:05.134279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.641 [2024-10-01 15:59:05.134525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-10-01 15:59:05.134536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.642 [2024-10-01 15:59:05.134543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.134552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.134563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.134572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.134578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.134585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.642 [2024-10-01 15:59:05.134598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.134605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.134612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.134619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.134631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.145889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.145916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.146184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.146200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.642 [2024-10-01 15:59:05.146209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.146402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.146415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.642 [2024-10-01 15:59:05.146422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.146443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.146453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.146463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.146470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.146477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.146486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.146493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.146499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.146513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.146519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.642 [2024-10-01 15:59:05.157249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.157272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.157442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.157455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.642 [2024-10-01 15:59:05.157463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.157669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.157680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.642 [2024-10-01 15:59:05.157687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.158563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.158579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.158934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.158947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.158954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.642 [2024-10-01 15:59:05.158967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.158974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.158981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.159340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.159353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.168544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.168565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.168807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.168821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.642 [2024-10-01 15:59:05.168828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.168977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.168988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.642 [2024-10-01 15:59:05.168995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.169007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.169016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.169027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.169034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.169041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.169050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.169056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.169062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.169076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.642 [2024-10-01 15:59:05.169084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.642 [2024-10-01 15:59:05.181115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.181137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.642 [2024-10-01 15:59:05.181612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.181631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.642 [2024-10-01 15:59:05.181639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.181833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-10-01 15:59:05.181846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.642 [2024-10-01 15:59:05.181853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.642 [2024-10-01 15:59:05.182571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.182590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.642 [2024-10-01 15:59:05.182909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.182922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.182929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.642 [2024-10-01 15:59:05.182940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.642 [2024-10-01 15:59:05.182947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.642 [2024-10-01 15:59:05.182954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.642 [2024-10-01 15:59:05.182997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.183005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.191528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.191550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.191716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.191730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.643 [2024-10-01 15:59:05.191738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.191953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.191965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.643 [2024-10-01 15:59:05.191974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.192099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.192113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.192209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.192220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.192227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.192236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.192242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.192249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.192275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.192283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.643 [2024-10-01 15:59:05.202660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.202681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.202940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.202954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.643 [2024-10-01 15:59:05.202963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.203131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.203142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.643 [2024-10-01 15:59:05.203150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.203303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.203318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.203443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.203455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.203462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.643 [2024-10-01 15:59:05.203471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.203479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.203485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.203509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.203517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.213047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.213070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.213307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.213321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.643 [2024-10-01 15:59:05.213328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.213523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.213535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.643 [2024-10-01 15:59:05.213544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.213557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.213566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.213576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.213583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.213590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.213599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.213609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.213615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.213629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.213636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.643 [2024-10-01 15:59:05.223996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.224018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.224251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.224265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.643 [2024-10-01 15:59:05.224273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.224432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.224443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.643 [2024-10-01 15:59:05.224451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.224462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.224472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.224481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.224489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.224496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.643 [2024-10-01 15:59:05.224505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.224511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.224517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.224531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.224538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.234482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.234504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.234616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.234630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.643 [2024-10-01 15:59:05.234637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.234851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.643 [2024-10-01 15:59:05.234867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.643 [2024-10-01 15:59:05.234875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.643 [2024-10-01 15:59:05.234887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.234900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.643 [2024-10-01 15:59:05.234910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.234918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.234924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.234934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.643 [2024-10-01 15:59:05.234939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.643 [2024-10-01 15:59:05.234946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.643 [2024-10-01 15:59:05.234960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.643 [2024-10-01 15:59:05.234968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.643 [2024-10-01 15:59:05.246490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.246514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.643 [2024-10-01 15:59:05.246752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.246765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.644 [2024-10-01 15:59:05.246773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.246929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.246940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.246948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.247768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.247784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.248405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.248419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.248426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.644 [2024-10-01 15:59:05.248437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.248444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.248451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.248734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.248745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.258349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.258373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.258725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.258743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.258755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.258843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.258854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.644 [2024-10-01 15:59:05.258861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.259071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.259085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.259228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.259240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.259247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.259257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.259264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.259271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.259299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.259306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.644 [2024-10-01 15:59:05.269886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.269908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.270151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.270166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.644 [2024-10-01 15:59:05.270174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.270369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.270383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.270390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.270534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.270547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.270686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.270697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.270704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.644 [2024-10-01 15:59:05.270715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.270721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.270732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.270763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.270771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.282005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.282027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.282218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.282232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.282240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.282431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.282442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.644 [2024-10-01 15:59:05.282450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.282462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.282471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.282489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.282497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.282504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.282513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.282519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.282525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.282538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.282546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.644 [2024-10-01 15:59:05.293059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.293080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.293271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.293285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.644 [2024-10-01 15:59:05.293293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.293437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.293448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.293456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.293469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.293478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.644 [2024-10-01 15:59:05.293492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.293498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.293507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.644 [2024-10-01 15:59:05.293516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.644 [2024-10-01 15:59:05.293523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.644 [2024-10-01 15:59:05.293530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.644 [2024-10-01 15:59:05.293544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.293551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.644 [2024-10-01 15:59:05.304286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.304308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.644 [2024-10-01 15:59:05.304473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.644 [2024-10-01 15:59:05.304487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.644 [2024-10-01 15:59:05.304494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.644 [2024-10-01 15:59:05.304689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.304700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.304707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.304719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.304728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.304747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.304755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.304762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.304771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.304777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.304783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.304797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.304805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.645 [2024-10-01 15:59:05.314784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.314806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.314964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.314978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.314987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.315148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.315159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.645 [2024-10-01 15:59:05.315167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.315179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.315189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.315199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.315205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.315211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.645 [2024-10-01 15:59:05.315220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.315227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.315234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.315248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.315254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.325222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.325244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.325409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.325423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.645 [2024-10-01 15:59:05.325430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.325570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.325581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.325588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.325600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.325609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.325619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.325626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.325633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.325642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.325648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.325654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.325668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.325678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.645 [2024-10-01 15:59:05.335806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.335827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.336016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.336030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.336038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.336159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.336171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.645 [2024-10-01 15:59:05.336178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.336190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.336200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.336210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.336218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.336224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.645 [2024-10-01 15:59:05.336233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.336239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.336246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.336259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.336266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.347843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.348046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.348217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.348233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.645 [2024-10-01 15:59:05.348241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.348386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.348396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.348403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.349068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.349086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.645 [2024-10-01 15:59:05.349221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.349235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.349242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.349252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.645 [2024-10-01 15:59:05.349259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.645 [2024-10-01 15:59:05.349267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.645 [2024-10-01 15:59:05.350092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.645 [2024-10-01 15:59:05.350108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.645 [2024-10-01 15:59:05.358306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.358328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.645 [2024-10-01 15:59:05.358542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.358556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.645 [2024-10-01 15:59:05.358564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.358705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.645 [2024-10-01 15:59:05.358715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.645 [2024-10-01 15:59:05.358722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.645 [2024-10-01 15:59:05.358852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.358872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.358900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.358909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.358915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.646 [2024-10-01 15:59:05.358924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.358931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.358939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.358953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.358960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.369776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.369797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.370014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.370030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.646 [2024-10-01 15:59:05.370038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.370181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.370195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.646 [2024-10-01 15:59:05.370203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.370216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.370225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.370235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.370242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.370248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.370258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.370265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.370271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.370285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.370291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.646 [2024-10-01 15:59:05.380546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.380568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.380819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.380833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.646 [2024-10-01 15:59:05.380841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.381061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.381074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.646 [2024-10-01 15:59:05.381081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.381321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.381336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.381373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.381381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.381388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.646 [2024-10-01 15:59:05.381398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.381404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.381411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.381539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.381550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.391353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.391375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.391497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.391510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.646 [2024-10-01 15:59:05.391518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.391656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.391667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.646 [2024-10-01 15:59:05.391674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.391686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.391695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.391706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.391713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.391720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.391731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.391737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.391743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.391756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.391763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.646 [2024-10-01 15:59:05.402592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.402615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.402833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.402847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.646 [2024-10-01 15:59:05.402855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.403078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.403091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.646 [2024-10-01 15:59:05.403098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.403110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.403119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.403130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.403136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.403146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.646 [2024-10-01 15:59:05.403154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.646 [2024-10-01 15:59:05.403161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.646 [2024-10-01 15:59:05.403167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.646 [2024-10-01 15:59:05.403181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.403188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.646 [2024-10-01 15:59:05.413740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.413763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.646 [2024-10-01 15:59:05.414105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.414123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.646 [2024-10-01 15:59:05.414131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.414328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-10-01 15:59:05.414340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.646 [2024-10-01 15:59:05.414348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.646 [2024-10-01 15:59:05.414501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.414514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.646 [2024-10-01 15:59:05.414652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.414664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.414671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.647 [2024-10-01 15:59:05.414682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.414689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.414695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.647 [2024-10-01 15:59:05.414725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.647 [2024-10-01 15:59:05.414732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.647 [2024-10-01 15:59:05.424349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.647 [2024-10-01 15:59:05.424371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.647 [2024-10-01 15:59:05.424528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-10-01 15:59:05.424541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.647 [2024-10-01 15:59:05.424550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.647 [2024-10-01 15:59:05.424699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-10-01 15:59:05.424710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.647 [2024-10-01 15:59:05.424721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.647 [2024-10-01 15:59:05.424733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.647 [2024-10-01 15:59:05.424742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.647 [2024-10-01 15:59:05.424752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.424759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.424766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.647 [2024-10-01 15:59:05.424774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.424781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.424787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.647 [2024-10-01 15:59:05.424801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.647 [2024-10-01 15:59:05.424808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.647 [2024-10-01 15:59:05.435885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.647 [2024-10-01 15:59:05.435908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.647 [2024-10-01 15:59:05.436115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-10-01 15:59:05.436129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.647 [2024-10-01 15:59:05.436137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.647 [2024-10-01 15:59:05.436269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-10-01 15:59:05.436280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.647 [2024-10-01 15:59:05.436287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.647 [2024-10-01 15:59:05.436300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.647 [2024-10-01 15:59:05.436310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.647 [2024-10-01 15:59:05.436327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.436334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.436340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.647 [2024-10-01 15:59:05.436350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.647 [2024-10-01 15:59:05.436357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.647 [2024-10-01 15:59:05.436364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.647 [2024-10-01 15:59:05.436480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.647 [2024-10-01 15:59:05.436491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.647 [2024-10-01 15:59:05.446743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.446768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.446989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.647 [2024-10-01 15:59:05.447003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.647 [2024-10-01 15:59:05.447011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.647 [2024-10-01 15:59:05.447155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.647 [2024-10-01 15:59:05.447166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.647 [2024-10-01 15:59:05.447174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.647 [2024-10-01 15:59:05.447185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.647 [2024-10-01 15:59:05.447194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.647 [2024-10-01 15:59:05.447204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.647 [2024-10-01 15:59:05.447211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.647 [2024-10-01 15:59:05.447218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.647 [2024-10-01 15:59:05.447227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.647 [2024-10-01 15:59:05.447233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.647 [2024-10-01 15:59:05.447240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.647 [2024-10-01 15:59:05.447253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.647 [2024-10-01 15:59:05.447260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.647 [2024-10-01 15:59:05.458766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.458788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.458904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.647 [2024-10-01 15:59:05.458918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.647 [2024-10-01 15:59:05.458926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.647 [2024-10-01 15:59:05.459155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.647 [2024-10-01 15:59:05.459167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.647 [2024-10-01 15:59:05.459174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.647 [2024-10-01 15:59:05.459381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.647 [2024-10-01 15:59:05.459397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.647 [2024-10-01 15:59:05.459567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.647 [2024-10-01 15:59:05.459579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.647 [2024-10-01 15:59:05.459585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.647 [2024-10-01 15:59:05.459599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.647 [2024-10-01 15:59:05.459606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.647 [2024-10-01 15:59:05.459613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.647 [2024-10-01 15:59:05.459640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.647 [2024-10-01 15:59:05.459647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.647 [2024-10-01 15:59:05.468846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.468882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.647 [2024-10-01 15:59:05.469113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.647 [2024-10-01 15:59:05.469128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.469137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.469338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.469351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.648 [2024-10-01 15:59:05.469358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.469367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.469380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.469388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.469394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.469401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.469414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.469421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.469429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.469438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.469451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.479722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.479745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.479966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.479982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.479990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.480167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.480177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.648 [2024-10-01 15:59:05.480184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.480318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.480331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.480359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.480366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.480374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.480383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.480389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.480395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.480409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.480416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.491127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.491149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.491442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.491459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.648 [2024-10-01 15:59:05.491467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.491664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.491676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.491684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.491713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.491724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.491734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.491740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.491747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.491757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.491764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.491770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.491785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.491791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.501788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.501810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.502010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.502023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.502033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.502249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.502259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.648 [2024-10-01 15:59:05.502266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.502460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.502474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.502567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.502577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.502584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.502593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.502599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.502606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.502626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.502635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.512076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.512098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.512307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.512321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.648 [2024-10-01 15:59:05.512328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.512469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.512479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.512487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.512500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.512510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.648 [2024-10-01 15:59:05.512520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.512527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.512533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.512542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.648 [2024-10-01 15:59:05.512552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.648 [2024-10-01 15:59:05.512559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.648 [2024-10-01 15:59:05.512573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.512580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.648 [2024-10-01 15:59:05.522766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.522788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.648 [2024-10-01 15:59:05.522984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.522998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.648 [2024-10-01 15:59:05.523005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.648 [2024-10-01 15:59:05.523149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.648 [2024-10-01 15:59:05.523160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.523167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.523407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.523422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.523458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.523467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.523474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.523483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.523489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.523496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.523510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.523517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.534749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.534771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.535064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.535082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.535091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.535233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.535244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.649 [2024-10-01 15:59:05.535251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.535455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.535473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.535505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.535514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.535521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.535530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.535536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.535542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.535671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.535681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.544931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.544954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.545114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.545128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.649 [2024-10-01 15:59:05.545136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.545262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.545272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.545279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.545291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.545300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.545311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.545317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.545324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.545333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.545339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.545345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.545359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.545367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.557250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.557271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.557484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.557497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.557513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.557707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.557719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.649 [2024-10-01 15:59:05.557726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.558188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.558204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.558479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.558492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.558499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.558508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.558515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.558521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.558674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.558685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.567503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.567524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.567767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.567781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.649 [2024-10-01 15:59:05.567789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.567946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.567957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.567965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.567977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.567988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.567998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.568004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.568012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.568020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.649 [2024-10-01 15:59:05.568026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.649 [2024-10-01 15:59:05.568037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.649 [2024-10-01 15:59:05.568050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.568057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.649 [2024-10-01 15:59:05.579825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.579847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.649 [2024-10-01 15:59:05.579955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.579968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.649 [2024-10-01 15:59:05.579976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.580192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.649 [2024-10-01 15:59:05.580203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.649 [2024-10-01 15:59:05.580210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.649 [2024-10-01 15:59:05.580664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.649 [2024-10-01 15:59:05.580680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.650 [2024-10-01 15:59:05.580961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.580973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.580981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.580990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.580997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.581004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.581158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.581168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.590849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.590877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.591137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.591151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.650 [2024-10-01 15:59:05.591159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.591301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.591311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.650 [2024-10-01 15:59:05.591318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.591773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.650 [2024-10-01 15:59:05.591788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.650 [2024-10-01 15:59:05.591965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.591979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.591986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.591996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.592002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.592009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.592152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.592163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.601753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.601774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.601888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.601901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.650 [2024-10-01 15:59:05.601909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.602129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.602141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.650 [2024-10-01 15:59:05.602149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.602160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.650 [2024-10-01 15:59:05.602170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.650 [2024-10-01 15:59:05.602181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.602188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.602195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.602204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.650 [2024-10-01 15:59:05.602210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.650 [2024-10-01 15:59:05.602217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.650 [2024-10-01 15:59:05.602230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.602237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.650 [2024-10-01 15:59:05.614184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.614208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.650 [2024-10-01 15:59:05.614514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.614531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.650 [2024-10-01 15:59:05.614539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.614737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.650 [2024-10-01 15:59:05.614749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.650 [2024-10-01 15:59:05.614756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.650 [2024-10-01 15:59:05.615001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.615018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.615126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.650 [2024-10-01 15:59:05.615136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.650 [2024-10-01 15:59:05.615142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.650 [2024-10-01 15:59:05.615153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.650 [2024-10-01 15:59:05.615160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.650 [2024-10-01 15:59:05.615166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.650 [2024-10-01 15:59:05.615188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.650 [2024-10-01 15:59:05.615196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.650 [2024-10-01 15:59:05.624968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.650 [2024-10-01 15:59:05.624990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.650 [2024-10-01 15:59:05.625139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.650 [2024-10-01 15:59:05.625152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.650 [2024-10-01 15:59:05.625160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.650 [2024-10-01 15:59:05.625305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.650 [2024-10-01 15:59:05.625317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.650 [2024-10-01 15:59:05.625324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.650 [2024-10-01 15:59:05.625336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.625346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.625356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.650 [2024-10-01 15:59:05.625363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.650 [2024-10-01 15:59:05.625369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.650 [2024-10-01 15:59:05.625378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.650 [2024-10-01 15:59:05.625384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.650 [2024-10-01 15:59:05.625391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.650 [2024-10-01 15:59:05.625843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.650 [2024-10-01 15:59:05.625854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.650 [2024-10-01 15:59:05.635861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.650 [2024-10-01 15:59:05.635886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.650 [2024-10-01 15:59:05.636008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.650 [2024-10-01 15:59:05.636020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.650 [2024-10-01 15:59:05.636028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.650 [2024-10-01 15:59:05.636223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.650 [2024-10-01 15:59:05.636235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.650 [2024-10-01 15:59:05.636243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.650 [2024-10-01 15:59:05.636255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.636264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.650 [2024-10-01 15:59:05.636275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.650 [2024-10-01 15:59:05.636282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.636289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.636298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.636305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.636311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.636325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.636332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.651 [2024-10-01 15:59:05.647849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.647877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.648272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.648290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.651 [2024-10-01 15:59:05.648298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.648444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.648455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.651 [2024-10-01 15:59:05.648463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.648568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.648581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.648856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.648878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.648885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.651 [2024-10-01 15:59:05.648895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.648902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.648908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.648949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.648958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.658522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.658545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.658710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.658724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.651 [2024-10-01 15:59:05.658731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.658875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.658887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.651 [2024-10-01 15:59:05.658896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.659029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.659042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.659069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.659076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.659083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.659093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.659099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.659105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.659119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.659126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.651 [2024-10-01 15:59:05.668955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.668977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.669218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.669233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.651 [2024-10-01 15:59:05.669241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.669375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.669385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.651 [2024-10-01 15:59:05.669392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.669523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.669536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.669562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.669569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.669577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.651 [2024-10-01 15:59:05.669586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.669592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.669599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.669612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.669619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.680118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.680139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.680350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.680363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.651 [2024-10-01 15:59:05.680372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.680589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.680600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.651 [2024-10-01 15:59:05.680607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.651 [2024-10-01 15:59:05.680620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.680630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.651 [2024-10-01 15:59:05.680641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.680647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.680654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.680662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.651 [2024-10-01 15:59:05.680669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.651 [2024-10-01 15:59:05.680676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.651 [2024-10-01 15:59:05.680818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.651 [2024-10-01 15:59:05.680829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.651 [2024-10-01 15:59:05.690496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.690517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.651 [2024-10-01 15:59:05.690729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-10-01 15:59:05.690743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.690750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.690879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.690890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.652 [2024-10-01 15:59:05.690898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.690910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.690919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.690929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.690937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.690945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.652 [2024-10-01 15:59:05.690954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.690960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.690966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.690980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.690987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.700577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.700607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.700724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.700737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.652 [2024-10-01 15:59:05.700745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.700891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.700903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.700910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.700919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.700931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.700939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.700946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.700956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.700969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.700976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.700982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.700987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.701000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.652 [2024-10-01 15:59:05.711824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.711846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.712000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.712014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.652 [2024-10-01 15:59:05.712022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.712172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.712183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.712191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.712468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.712484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.712634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.712646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.712653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.652 [2024-10-01 15:59:05.712662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.712669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.712675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.712817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.712828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.722444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.722466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.722710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.722724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.722732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.722900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.722911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.652 [2024-10-01 15:59:05.722922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.722935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.722944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.722954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.722961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.722967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.722981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.722987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.722994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.723008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.723014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.652 [2024-10-01 15:59:05.733916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.733938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.734144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.734157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.652 [2024-10-01 15:59:05.734165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.734325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.734335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.734343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.734354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.734363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.652 [2024-10-01 15:59:05.734374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.734381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.734388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.652 [2024-10-01 15:59:05.734396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.652 [2024-10-01 15:59:05.734402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.652 [2024-10-01 15:59:05.734408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.652 [2024-10-01 15:59:05.734422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.734429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.652 [2024-10-01 15:59:05.745410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.745435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.652 [2024-10-01 15:59:05.745647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.745660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.652 [2024-10-01 15:59:05.745668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.652 [2024-10-01 15:59:05.745858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-10-01 15:59:05.745875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.745883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.745895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.745905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.745915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.745921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.745928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.745936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.745943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.745950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.745963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.745970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.653 [2024-10-01 15:59:05.758774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.758797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.759135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.759152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.759160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.759353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.759365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.653 [2024-10-01 15:59:05.759372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.759622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.759637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.759796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.759808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.759815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.653 [2024-10-01 15:59:05.759830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.759836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.759843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.759877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.759885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.769724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.769746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.769975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.769991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.653 [2024-10-01 15:59:05.769998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.770193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.770205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.770212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.770452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.770468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.770505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.770514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.770520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.770529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.770535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.770542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.770671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.770681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.653 [2024-10-01 15:59:05.780709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.780732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.780959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.780974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.780982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.781120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.781131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.653 [2024-10-01 15:59:05.781139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.781382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.781397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.781433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.781442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.781449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.653 [2024-10-01 15:59:05.781458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.781464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.781472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.781601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.781611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.791766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.791788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.791915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.791930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.653 [2024-10-01 15:59:05.791938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.792157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.792168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.792175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.792336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.792350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.792493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.792503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.792510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.792519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.653 [2024-10-01 15:59:05.792526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.653 [2024-10-01 15:59:05.792533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.653 [2024-10-01 15:59:05.792679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.653 [2024-10-01 15:59:05.792689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.653 [2024-10-01 15:59:05.803102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.803124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.653 [2024-10-01 15:59:05.803497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.803514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.653 [2024-10-01 15:59:05.803522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.803664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-10-01 15:59:05.803675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.653 [2024-10-01 15:59:05.803682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.653 [2024-10-01 15:59:05.803970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.653 [2024-10-01 15:59:05.803986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.804025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.804034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.804041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.654 [2024-10-01 15:59:05.804050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.804056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.804063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.804191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.804202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.814162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.814185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.814510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.814527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.654 [2024-10-01 15:59:05.814535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.814750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.814762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.654 [2024-10-01 15:59:05.814770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.815039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.815055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.815092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.815101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.815108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.815117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.815127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.815134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.815263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.815273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.654 [2024-10-01 15:59:05.826580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.826603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.826841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.826854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.654 [2024-10-01 15:59:05.826866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.826963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.826975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.654 [2024-10-01 15:59:05.826982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.826993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.827003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.827022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.827030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.827037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.654 [2024-10-01 15:59:05.827046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.827052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.827059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.827073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.827080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.837733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.837755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.837848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.837868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.654 [2024-10-01 15:59:05.837876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.838072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.838083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.654 [2024-10-01 15:59:05.838091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.838104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.838117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.838127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.838133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.838140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.838149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.838156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.838162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.838176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.838182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.654 [2024-10-01 15:59:05.849244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.849267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.849647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.849665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.654 [2024-10-01 15:59:05.849673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.849816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.849827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.654 [2024-10-01 15:59:05.849834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.849983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.849997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.850341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.850353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.850361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.654 [2024-10-01 15:59:05.850370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.850377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.850383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.654 [2024-10-01 15:59:05.850540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.850551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.654 [2024-10-01 15:59:05.859433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.859454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.654 [2024-10-01 15:59:05.859619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.859635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.654 [2024-10-01 15:59:05.859642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.859835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-10-01 15:59:05.859846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.654 [2024-10-01 15:59:05.859853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.654 [2024-10-01 15:59:05.859870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.859880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.654 [2024-10-01 15:59:05.859890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.654 [2024-10-01 15:59:05.859896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.654 [2024-10-01 15:59:05.859903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.655 [2024-10-01 15:59:05.859911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.655 [2024-10-01 15:59:05.859917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.655 [2024-10-01 15:59:05.859924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.655 [2024-10-01 15:59:05.859938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.655 [2024-10-01 15:59:05.859945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.655 [2024-10-01 15:59:05.870908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.655 [2024-10-01 15:59:05.870931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.655 [2024-10-01 15:59:05.871165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.655 [2024-10-01 15:59:05.871179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.655 [2024-10-01 15:59:05.871186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.655 [2024-10-01 15:59:05.871328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.655 [2024-10-01 15:59:05.871338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.655 [2024-10-01 15:59:05.871346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.655 [2024-10-01 15:59:05.871358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.655 [2024-10-01 15:59:05.871368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.655 [2024-10-01 15:59:05.871378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.655 [2024-10-01 15:59:05.871385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.655 [2024-10-01 15:59:05.871392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.655 [2024-10-01 15:59:05.871402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.871408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.871417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.871431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.871438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.883053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.883075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.883378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.883395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.655 [2024-10-01 15:59:05.883403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.883546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.883557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.655 [2024-10-01 15:59:05.883564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.883920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.883938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.884090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.884102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.884108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.884119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.884126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.884132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.884274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.884286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.894177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.894199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.894434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.894448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.655 [2024-10-01 15:59:05.894456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.894585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.894595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.655 [2024-10-01 15:59:05.894602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.894962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.894979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.895142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.895155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.895162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.895172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.895179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.895186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.895418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.895430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.905588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.905609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.905805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.905819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.655 [2024-10-01 15:59:05.905828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.905911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.905923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.655 [2024-10-01 15:59:05.905930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.905942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.905952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.905963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.905970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.905977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.905986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.905992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.905998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.906448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.906460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.655 [2024-10-01 15:59:05.916470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.916492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.655 [2024-10-01 15:59:05.916674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.916687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.655 [2024-10-01 15:59:05.916699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.916839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.655 [2024-10-01 15:59:05.916850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.655 [2024-10-01 15:59:05.916857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.655 [2024-10-01 15:59:05.916874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.916884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.655 [2024-10-01 15:59:05.916894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.655 [2024-10-01 15:59:05.916902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.655 [2024-10-01 15:59:05.916908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.655 [2024-10-01 15:59:05.916918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.916923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.916930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.916943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.916951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.928810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.928834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.929187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.929205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.656 [2024-10-01 15:59:05.929214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.929364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.929374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.656 [2024-10-01 15:59:05.929381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.929461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.929473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.929640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.929651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.929659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.929670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.929676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.929683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.930484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.930500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.939228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.939250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.939400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.939414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.656 [2024-10-01 15:59:05.939422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.939506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.939517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.656 [2024-10-01 15:59:05.939524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.939537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.939547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.939557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.939563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.939570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.939579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.939585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.939592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.939606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.939614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.950291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.950316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.950490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.950504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.656 [2024-10-01 15:59:05.950513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.950704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.950715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.656 [2024-10-01 15:59:05.950722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.950851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.950870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.951217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.951236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.951243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.951253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.951260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.951266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.951422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.951433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.961266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.961289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.961401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.961414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.656 [2024-10-01 15:59:05.961421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.961623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.961635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.656 [2024-10-01 15:59:05.961642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.962093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.962110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.962309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.962321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.962328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.962337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.962344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.656 [2024-10-01 15:59:05.962351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.656 [2024-10-01 15:59:05.962384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.962392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.656 [2024-10-01 15:59:05.972602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.972624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.656 [2024-10-01 15:59:05.972728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.972742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.656 [2024-10-01 15:59:05.972750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.972925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.656 [2024-10-01 15:59:05.972937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.656 [2024-10-01 15:59:05.972944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.656 [2024-10-01 15:59:05.972957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.972966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.656 [2024-10-01 15:59:05.972977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.656 [2024-10-01 15:59:05.972984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.972992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.973005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:05.973013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.973019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.973468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:05.973479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:05.983595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:05.983618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:05.983723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:05.983740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.657 [2024-10-01 15:59:05.983749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:05.983902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:05.983914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:05.983922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:05.983934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:05.983945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:05.983955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:05.983962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.983969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.983978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:05.983984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.983990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.984006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:05.984013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:05.995483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:05.995506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:05.995737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:05.995751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:05.995759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:05.995951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:05.995962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.657 [2024-10-01 15:59:05.995970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:05.996209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:05.996224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:05.996373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:05.996385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.996392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.996402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:05.996410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:05.996416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:05.996446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:05.996454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 11386.21 IOPS, 44.48 MiB/s [2024-10-01 15:59:06.008273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.008292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.008414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.008428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.657 [2024-10-01 15:59:06.008436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.008532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.008542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:06.008549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.009666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.009685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.010037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.010050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.010061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.010071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.010078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.010085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.010302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.010314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.018356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.018378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.018489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.018505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:06.018513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.018603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.018613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.657 [2024-10-01 15:59:06.018621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.018633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.018643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.018653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.018660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.018667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.018676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.018682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.018689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.018703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.018709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.030807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.030830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.031145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.031162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.657 [2024-10-01 15:59:06.031171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.031316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.031331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:06.031339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.031522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.031538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.657 [2024-10-01 15:59:06.031679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.031691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.031698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.031708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.657 [2024-10-01 15:59:06.031715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.657 [2024-10-01 15:59:06.031721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.657 [2024-10-01 15:59:06.031752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.031762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.657 [2024-10-01 15:59:06.041537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.041560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.657 [2024-10-01 15:59:06.041807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.657 [2024-10-01 15:59:06.041823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.657 [2024-10-01 15:59:06.041832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.657 [2024-10-01 15:59:06.041927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.041939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.041946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.042091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.042104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.042130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.042139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.042145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.042155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.042161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.042169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.042183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.042190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.053426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.053451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.053612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.053626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.053634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.053764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.053775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.658 [2024-10-01 15:59:06.053782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.053794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.053804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.053815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.053821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.053828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.053836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.053842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.053849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.053869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.053877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.065389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.065412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.065548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.065562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.658 [2024-10-01 15:59:06.065570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.065724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.065735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.065743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.066088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.066105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.066377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.066389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.066396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.066409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.066416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.066423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.066576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.066587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.076393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.076416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.076761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.076779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.076787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.076887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.076898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.658 [2024-10-01 15:59:06.076906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.077051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.077065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.077092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.077101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.077107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.077116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.077122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.077129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.077143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.077151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.087442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.087465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.087750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.087768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.658 [2024-10-01 15:59:06.087777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.087926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.087938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.087948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.088203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.088217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.088377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.088390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.088397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.088407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.088414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.088421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.088451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.088459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.098161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.098321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.658 [2024-10-01 15:59:06.098428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.098444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.658 [2024-10-01 15:59:06.098452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.098566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-10-01 15:59:06.098579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.658 [2024-10-01 15:59:06.098587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.658 [2024-10-01 15:59:06.098595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.098725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.658 [2024-10-01 15:59:06.098736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.098743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.658 [2024-10-01 15:59:06.098750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.658 [2024-10-01 15:59:06.098781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.658 [2024-10-01 15:59:06.098789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.658 [2024-10-01 15:59:06.098795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.098803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.098815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.109718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.109740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.109907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.109922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.109931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.110066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.110077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.110085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.110097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.110109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.110120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.110127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.110134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.110143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.110150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.110157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.110172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.110179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.121316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.121339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.121462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.121475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.121484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.121652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.121664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.121672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.121684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.121694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.121704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.121711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.121718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.121727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.121736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.121743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.121758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.121765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.133249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.133272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.133574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.133592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.133600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.133694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.133706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.133713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.133897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.133912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.133939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.133947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.133954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.133963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.133969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.133976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.133990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.133997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.143629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.143652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.143804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.143818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.143826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.143934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.143945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.143953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.143969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.143978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.143988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.143995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.144001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.144011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.144018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.144025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.144039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.144046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.155236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.155259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.155438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.155453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.155462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.155567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.155578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.155586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.155715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.155728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.659 [2024-10-01 15:59:06.155755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.155762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.155770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.155780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.659 [2024-10-01 15:59:06.155787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.659 [2024-10-01 15:59:06.155793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.659 [2024-10-01 15:59:06.155808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.155815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.659 [2024-10-01 15:59:06.166328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.166352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.659 [2024-10-01 15:59:06.166500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.166518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.659 [2024-10-01 15:59:06.166526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.659 [2024-10-01 15:59:06.166621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.659 [2024-10-01 15:59:06.166632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.659 [2024-10-01 15:59:06.166639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.660 [2024-10-01 15:59:06.166652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.660 [2024-10-01 15:59:06.166662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.660 [2024-10-01 15:59:06.166916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.660 [2024-10-01 15:59:06.166927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.660 [2024-10-01 15:59:06.166934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.660 [2024-10-01 15:59:06.166944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.166951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.166958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.167248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.167259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.177389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.177413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.177656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.177672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.660 [2024-10-01 15:59:06.177681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.177812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.177823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.660 [2024-10-01 15:59:06.177831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.177980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.177995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.178021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.178029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.178037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.178047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.178053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.178063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.178078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.178085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.660 [2024-10-01 15:59:06.188472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.188494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.188804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.188822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.660 [2024-10-01 15:59:06.188830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.188918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.188930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.660 [2024-10-01 15:59:06.188938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.189082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.189096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.189245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.189256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.189264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.660 [2024-10-01 15:59:06.189273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.189279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.189286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.189315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.189325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.199240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.199263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.199400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.199415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.660 [2024-10-01 15:59:06.199422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.199516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.199527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.660 [2024-10-01 15:59:06.199536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.199665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.199682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.199821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.199832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.199839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.199849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.199855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.199861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.199899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.199908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.660 [2024-10-01 15:59:06.210263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.210287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.210605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.210623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.660 [2024-10-01 15:59:06.210631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.210723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.210734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.660 [2024-10-01 15:59:06.210741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.210771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.210782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.660 [2024-10-01 15:59:06.210793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.210800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.210808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.660 [2024-10-01 15:59:06.210817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.660 [2024-10-01 15:59:06.210824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.660 [2024-10-01 15:59:06.210831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.660 [2024-10-01 15:59:06.210844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.210851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.660 [2024-10-01 15:59:06.221330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.221353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.660 [2024-10-01 15:59:06.221608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.221633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.660 [2024-10-01 15:59:06.221646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.660 [2024-10-01 15:59:06.221729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.660 [2024-10-01 15:59:06.221740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.221747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.221898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.221912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.222059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.222069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.222076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.222085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.222091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.222098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.222128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.222137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.661 [2024-10-01 15:59:06.232053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.232075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.232299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.232315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.232323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.232415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.232427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.661 [2024-10-01 15:59:06.232434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.232607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.232621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.232651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.232659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.232665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.661 [2024-10-01 15:59:06.232675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.232682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.232688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.232708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.232715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.242485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.242507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.242693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.242708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.661 [2024-10-01 15:59:06.242715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.242802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.242813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.242820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.242956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.242970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.243112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.243123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.243130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.243140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.243147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.243154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.243183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.243191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.661 [2024-10-01 15:59:06.253896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.253918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.254032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.254046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.254054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.254137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.254147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.661 [2024-10-01 15:59:06.254154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.254166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.254176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.254190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.254197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.254204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.661 [2024-10-01 15:59:06.254213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.254218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.254225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.254238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.254246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.264912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.264935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.265157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.265172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.661 [2024-10-01 15:59:06.265180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.265280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.265291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.265298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.265635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.265651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.265804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.265817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.265824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.265834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.265841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.265848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.265996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.266007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.661 [2024-10-01 15:59:06.275286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.275308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.661 [2024-10-01 15:59:06.275551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.275567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.661 [2024-10-01 15:59:06.275575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.275740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-10-01 15:59:06.275753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.661 [2024-10-01 15:59:06.275760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.661 [2024-10-01 15:59:06.275913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.275928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.661 [2024-10-01 15:59:06.275956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.275965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.275972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.661 [2024-10-01 15:59:06.275981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.661 [2024-10-01 15:59:06.275987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.661 [2024-10-01 15:59:06.275994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.661 [2024-10-01 15:59:06.276121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.661 [2024-10-01 15:59:06.276132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.285514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.285535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.285721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.285734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.285743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.285820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.285830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.285837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.285980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.285994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.286134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.286144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.286151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.286161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.286168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.286175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.286201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.286212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.662 [2024-10-01 15:59:06.296348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.296370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.296476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.296490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.296498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.296619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.296631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.296638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.296767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.296780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.296807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.296815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.296822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.662 [2024-10-01 15:59:06.296832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.296839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.296846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.296980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.296991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.306947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.306969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.307199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.307214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.307221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.307363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.307373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.307380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.307511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.307524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.307662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.307678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.307685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.307694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.307701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.307707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.307852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.307919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.662 [2024-10-01 15:59:06.318307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.318330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.318466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.318480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.318488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.318589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.318600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.318607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.318627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.318638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.318648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.318655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.318662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.662 [2024-10-01 15:59:06.318671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.318677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.318683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.318697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.318705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.329260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.329282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.329493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.329507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.329515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.329638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.329653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.329661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.329673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.329683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.329693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.329700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.329706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.329715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.329721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.329728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.662 [2024-10-01 15:59:06.329742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.662 [2024-10-01 15:59:06.329749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.662 [2024-10-01 15:59:06.340346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.340368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.662 [2024-10-01 15:59:06.340539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.340554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.662 [2024-10-01 15:59:06.340561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.340702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-10-01 15:59:06.340713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.662 [2024-10-01 15:59:06.340720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.662 [2024-10-01 15:59:06.340732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.340742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.662 [2024-10-01 15:59:06.340752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.340759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.662 [2024-10-01 15:59:06.340766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.662 [2024-10-01 15:59:06.340775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.662 [2024-10-01 15:59:06.340781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.340787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.340801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.340808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.351435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.351458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.351707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.351722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.351730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.351880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.351892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.351899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.352061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.352075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.352216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.352228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.352235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.352245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.352252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.352258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.352400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.352412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.663 [2024-10-01 15:59:06.361516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.361546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.361689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.361702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.361710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.361855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.361871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.361878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.361887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.361899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.361907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.361914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.361924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.663 [2024-10-01 15:59:06.361937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.361944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.361949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.361956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.361969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.373613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.373635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.373797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.373810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.373817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.374012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.374024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.374031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.374043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.374052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.374070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.374079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.374086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.374095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.374101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.374108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.374121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.374128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.663 [2024-10-01 15:59:06.385052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.385074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.385265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.385279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.385288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.385427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.385438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.385449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.385461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.385471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.385481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.385488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.385494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.663 [2024-10-01 15:59:06.385503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.385509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.385516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.385530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.385537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.396383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.396405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.396974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.396995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.397004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.397227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.397239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.397247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.397408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.397423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.397460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.397469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.397476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.397485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.397492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.397498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.663 [2024-10-01 15:59:06.397512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.663 [2024-10-01 15:59:06.397519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.663 [2024-10-01 15:59:06.407142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.407163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.663 [2024-10-01 15:59:06.407325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.407339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.663 [2024-10-01 15:59:06.407346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.407429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-10-01 15:59:06.407440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.663 [2024-10-01 15:59:06.407447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.663 [2024-10-01 15:59:06.407793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.407807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.663 [2024-10-01 15:59:06.407972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.663 [2024-10-01 15:59:06.407984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.663 [2024-10-01 15:59:06.407992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.663 [2024-10-01 15:59:06.408001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.664 [2024-10-01 15:59:06.408008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.664 [2024-10-01 15:59:06.408014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.664 [2024-10-01 15:59:06.408045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.664 [2024-10-01 15:59:06.408053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.664 [2024-10-01 15:59:06.417918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.664 [2024-10-01 15:59:06.417939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.664 [2024-10-01 15:59:06.418101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.664 [2024-10-01 15:59:06.418114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.664 [2024-10-01 15:59:06.418123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.664 [2024-10-01 15:59:06.418317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.664 [2024-10-01 15:59:06.418328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.664 [2024-10-01 15:59:06.418335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.664 [2024-10-01 15:59:06.418348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.664 [2024-10-01 15:59:06.418357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.664 [2024-10-01 15:59:06.418367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.664 [2024-10-01 15:59:06.418374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.664 [2024-10-01 15:59:06.418381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.664 [2024-10-01 15:59:06.418391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.664 [2024-10-01 15:59:06.418400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.664 [2024-10-01 15:59:06.418406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.664 [2024-10-01 15:59:06.418420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.664 [2024-10-01 15:59:06.418426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.664 [2024-10-01 15:59:06.429808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.664 [2024-10-01 15:59:06.429829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.664 [2024-10-01 15:59:06.430001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.664 [2024-10-01 15:59:06.430015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.664 [2024-10-01 15:59:06.430022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.664 [2024-10-01 15:59:06.430157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.664 [2024-10-01 15:59:06.430168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.664 [2024-10-01 15:59:06.430176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.664 [2024-10-01 15:59:06.430188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.664 [2024-10-01 15:59:06.430197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.664 [2024-10-01 15:59:06.430207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.664 [2024-10-01 15:59:06.430213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.664 [2024-10-01 15:59:06.430220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.664 [2024-10-01 15:59:06.430229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.430236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.430243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.430256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.430263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.442554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.442576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.442826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.442840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.664 [2024-10-01 15:59:06.442848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.443098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.443111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.664 [2024-10-01 15:59:06.443118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.443508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.443523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.443681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.443694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.443701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.443710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.443717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.443724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.443876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.443887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.453646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.453667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.453860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.453878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.664 [2024-10-01 15:59:06.453887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.453979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.453990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.664 [2024-10-01 15:59:06.453997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.454009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.454018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.454028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.454035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.454042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.454052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.454060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.454067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.454081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.454087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.466626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.466648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.467177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.467199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.664 [2024-10-01 15:59:06.467207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.467415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.664 [2024-10-01 15:59:06.467427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.664 [2024-10-01 15:59:06.467435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.664 [2024-10-01 15:59:06.467698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.467714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.664 [2024-10-01 15:59:06.467879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.467892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.467899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.467909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.664 [2024-10-01 15:59:06.467916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.664 [2024-10-01 15:59:06.467922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.664 [2024-10-01 15:59:06.467953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.467960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.664 [2024-10-01 15:59:06.477701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.477724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.664 [2024-10-01 15:59:06.477956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.477971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.477979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.478154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.478165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.478173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.478184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.478195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.478205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.478211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.478218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.478228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.478235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.478245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.478259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.478265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.488513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.488536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.488725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.488740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.488749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.488895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.488906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.488914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.489045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.489058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.489199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.489209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.489216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.489226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.489233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.489240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.489270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.489279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.500715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.500738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.500847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.500867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.500875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.501009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.501020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.501027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.501040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.501054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.501064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.501071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.501077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.501086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.501092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.501099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.501113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.501121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.510998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.511020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.511182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.511196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.511203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.511342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.511353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.511361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.511373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.511383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.511394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.511400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.511407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.511416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.511423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.511430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.511443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.511450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.522800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.522822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.523201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.523220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.523234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.523320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.523331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.523338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.523483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.523497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.523533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.523542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.523549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.523559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.523565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.523571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.523748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.523759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.534610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.534632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.534987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.535007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.535015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.535237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.535250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.535257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.535569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.535586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.535738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.535749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.535757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.535766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.665 [2024-10-01 15:59:06.535773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.665 [2024-10-01 15:59:06.535779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.665 [2024-10-01 15:59:06.535814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.535822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.665 [2024-10-01 15:59:06.545849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.545879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.665 [2024-10-01 15:59:06.546059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.546073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.665 [2024-10-01 15:59:06.546081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.546301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.665 [2024-10-01 15:59:06.546312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.665 [2024-10-01 15:59:06.546319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.665 [2024-10-01 15:59:06.546558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.546574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.665 [2024-10-01 15:59:06.546620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.546630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.546637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.546646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.546653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.546660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.546674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.546681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.556528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.556549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.556759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.556773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.666 [2024-10-01 15:59:06.556781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.556998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.557011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.666 [2024-10-01 15:59:06.557020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.557261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.557276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.557430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.557441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.557449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.557458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.557465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.557472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.557502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.557509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.567522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.567545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.567790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.567804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.666 [2024-10-01 15:59:06.567813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.567898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.567910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.666 [2024-10-01 15:59:06.567918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.568048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.568062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.568201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.568212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.568220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.568230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.568236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.568243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.568272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.568282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.578849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.578878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.579284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.579301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.666 [2024-10-01 15:59:06.579309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.579407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.579418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.666 [2024-10-01 15:59:06.579425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.579582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.579596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.579735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.579746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.579753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.579762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.579769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.579776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.579806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.579814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.590360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.590382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.590713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.590731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.666 [2024-10-01 15:59:06.590739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.590885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.590897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.666 [2024-10-01 15:59:06.590905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.591081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.591095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.591236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.591248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.591255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.591264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.591271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.591278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.591309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.591320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.601875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.601897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.666 [2024-10-01 15:59:06.602273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.602290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.666 [2024-10-01 15:59:06.602298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.602490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-10-01 15:59:06.602502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.666 [2024-10-01 15:59:06.602509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.666 [2024-10-01 15:59:06.602802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.602818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.666 [2024-10-01 15:59:06.602859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.602873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.602880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.602890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.666 [2024-10-01 15:59:06.602896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.666 [2024-10-01 15:59:06.602903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.666 [2024-10-01 15:59:06.603033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.603043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.666 [2024-10-01 15:59:06.613397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.666 [2024-10-01 15:59:06.613419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.666 [2024-10-01 15:59:06.613769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-10-01 15:59:06.613787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.666 [2024-10-01 15:59:06.613795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.666 [2024-10-01 15:59:06.613989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-10-01 15:59:06.614000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.666 [2024-10-01 15:59:06.614008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.666 [2024-10-01 15:59:06.614295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.666 [2024-10-01 15:59:06.614310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.666 [2024-10-01 15:59:06.614462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.666 [2024-10-01 15:59:06.614478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.666 [2024-10-01 15:59:06.614486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.666 [2024-10-01 15:59:06.614496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.666 [2024-10-01 15:59:06.614503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.666 [2024-10-01 15:59:06.614509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.666 [2024-10-01 15:59:06.614541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.666 [2024-10-01 15:59:06.614549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.624834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.624856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.625228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.625247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.625255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.625396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.625407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.625415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.625559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.625573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.625712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.625722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.625730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.625741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.625747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.625754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.625784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.625793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.667 [2024-10-01 15:59:06.636029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.636051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.636212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.636226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.636234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.636429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.636444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.636451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.636464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.636473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.636483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.636489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.636496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.667 [2024-10-01 15:59:06.636505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.636512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.636518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.636532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.636539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.648703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.648725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.648962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.648977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.648984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.649216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.649229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.649236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.649534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.649549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.649795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.649807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.649815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.649824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.649831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.649837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.650200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.650214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.667 [2024-10-01 15:59:06.660552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.660574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.660674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.660686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.660695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.660776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.660786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.660793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.661145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.661160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.661322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.661334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.661341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.667 [2024-10-01 15:59:06.661351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.661359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.661365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.661397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.661404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.672074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.672096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.672315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.672329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.672336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.672486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.672497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.672504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.672956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.672972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.673170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.673183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.673194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.673205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.673212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.673219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.673364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.673375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.667 [2024-10-01 15:59:06.682709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.682730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.667 [2024-10-01 15:59:06.682968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.682983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.667 [2024-10-01 15:59:06.682991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.683077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-10-01 15:59:06.683088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.667 [2024-10-01 15:59:06.683095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.667 [2024-10-01 15:59:06.683107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.683117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.667 [2024-10-01 15:59:06.683128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.683135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.683142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.667 [2024-10-01 15:59:06.683150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.667 [2024-10-01 15:59:06.683157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.667 [2024-10-01 15:59:06.683163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.667 [2024-10-01 15:59:06.683177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.667 [2024-10-01 15:59:06.683185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.694872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.694894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.695091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.695105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.695112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.695241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.695252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.695262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.695274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.695283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.695294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.695301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.695308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.695317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.695323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.695329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.695351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.695359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.668 [2024-10-01 15:59:06.706124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.706146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.706313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.706327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.706334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.706484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.706495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.706502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.706514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.706523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.706534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.706540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.706548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.668 [2024-10-01 15:59:06.706557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.706564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.706570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.706584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.706591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.717366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.717392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.717854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.717880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.717888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.718109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.718121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.718129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.718599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.718616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.718779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.718791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.718798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.718808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.718816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.718822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.718973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.718984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.668 [2024-10-01 15:59:06.728558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.728580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.728822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.728836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.728844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.729038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.729049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.729058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.729511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.729527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.729695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.729706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.729714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.668 [2024-10-01 15:59:06.729727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.729734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.729741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.729893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.729904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.739290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.739312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.739553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.739567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.739574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.739794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.739806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.739813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.739826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.739836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.739846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.739852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.739859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.739873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.739879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.739885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.739907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.739914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.668 [2024-10-01 15:59:06.751972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.751995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.752206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.752220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.752228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.752372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.752383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.752390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.752412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.752422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.752432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.752438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.752445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.668 [2024-10-01 15:59:06.752453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.752460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.752466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.752479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.752485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.668 [2024-10-01 15:59:06.763441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.763463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.668 [2024-10-01 15:59:06.763581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.763595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.668 [2024-10-01 15:59:06.763602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.763735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-10-01 15:59:06.763746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.668 [2024-10-01 15:59:06.763753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.668 [2024-10-01 15:59:06.763765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.763774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.668 [2024-10-01 15:59:06.763786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.763793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.763800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.668 [2024-10-01 15:59:06.763809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.668 [2024-10-01 15:59:06.763815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.668 [2024-10-01 15:59:06.763821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.763834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.763841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.669 [2024-10-01 15:59:06.773821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.773843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.774015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.774029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.774037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.774241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.774252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.774259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.774271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.774280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.774292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.774299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.774306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.669 [2024-10-01 15:59:06.774315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.774321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.774328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.774342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.774350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.785913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.785936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.786276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.786295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.786304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.786451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.786462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.786469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.786721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.786736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.786901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.786914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.786921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.786931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.786937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.786948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.786978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.786986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.669 [2024-10-01 15:59:06.797017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.797038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.797311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.797327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.797335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.797527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.797538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.797545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.797558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.797568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.797578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.797584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.797591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.669 [2024-10-01 15:59:06.797600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.797606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.797613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.797628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.797635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.807341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.807363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.807602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.807616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.807624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.807719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.807729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.807736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.808122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.808146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.808305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.808317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.808324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.808334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.808342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.808348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.808379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.808387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.669 [2024-10-01 15:59:06.818484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.818507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.818736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.818750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.818758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.818977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.818990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.818997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.819010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.819020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.819031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.819037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.819044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.669 [2024-10-01 15:59:06.819052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.819059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.819066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.819080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.819088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.830482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.830506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.830744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.830758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.830770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.830916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.830927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.830934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.830947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.830956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.830974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.830982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.830989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.830998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.669 [2024-10-01 15:59:06.831004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.669 [2024-10-01 15:59:06.831010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.669 [2024-10-01 15:59:06.831024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.669 [2024-10-01 15:59:06.831031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.669 [2024-10-01 15:59:06.842527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.842550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.669 [2024-10-01 15:59:06.842674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.842688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.669 [2024-10-01 15:59:06.842696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.842914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-10-01 15:59:06.842926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.669 [2024-10-01 15:59:06.842933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.669 [2024-10-01 15:59:06.843117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.669 [2024-10-01 15:59:06.843131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.843159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.843168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.843175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.670 [2024-10-01 15:59:06.843185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.843191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.843201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.843330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.843340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.853977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.853999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.854356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.854374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.854382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.854492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.854504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.854511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.854686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.854700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.854841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.854851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.854859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.854875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.854883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.854890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.854921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.854929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.670 [2024-10-01 15:59:06.864667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.864690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.864805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.864818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.864827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.865019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.865030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.865037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.865049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.865060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.865073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.865080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.865087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.670 [2024-10-01 15:59:06.865096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.865102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.865109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.865124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.865131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.874749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.874779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.874897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.874911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.874919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.875090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.875101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.875110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.875119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.875131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.875139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.875146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.875153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.875166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.875174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.875180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.875187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.875199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.670 [2024-10-01 15:59:06.887287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.887308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.887521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.887534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.887543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.887692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.887703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.887711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.887722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.887731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.887741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.887748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.887756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.670 [2024-10-01 15:59:06.887764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.887770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.887777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.887790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.887796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.898731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.898753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.898989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.899004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.899012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.899269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.899282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.899289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.899301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.899311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.899321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.899327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.899335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.899343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.899349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.899356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.899371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.899381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.670 [2024-10-01 15:59:06.911318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.911340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.911577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.911592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.911600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.911759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.911770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.911777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.912061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.912077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.670 [2024-10-01 15:59:06.912439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.912453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.912460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.670 [2024-10-01 15:59:06.912470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.670 [2024-10-01 15:59:06.912477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.670 [2024-10-01 15:59:06.912484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.670 [2024-10-01 15:59:06.912642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.912652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.670 [2024-10-01 15:59:06.924483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.924506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.670 [2024-10-01 15:59:06.924657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.924671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.670 [2024-10-01 15:59:06.924679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.924802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-10-01 15:59:06.924812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.670 [2024-10-01 15:59:06.924819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.670 [2024-10-01 15:59:06.924831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.924841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.924860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.924877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.924884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.924893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.924899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.924905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.924920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.924928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.671 [2024-10-01 15:59:06.936781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.936803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.936969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.936984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.936991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.937211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.937221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.937229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.937720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.937736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.938384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.938399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.938407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.671 [2024-10-01 15:59:06.938416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.938422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.938429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.938771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.938783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.947840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.947867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.948123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.948131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.948327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.948344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.948351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.948363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.948373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.948624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.948636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.948643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.948653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.948660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.948666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.949445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.949460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.671 [2024-10-01 15:59:06.959122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.959145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.959392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.959405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.959415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.959658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.959670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.959677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.959689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.959700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.959710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.959716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.959723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.671 [2024-10-01 15:59:06.959731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.959737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.959744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.959758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.959765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.970437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.970459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.970618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.970632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.970639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.970837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.970848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.970855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.970871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.970881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.970891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.970898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.970904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.970913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.970919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.970927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.970941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.970948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.671 [2024-10-01 15:59:06.982501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.982524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.982779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.982795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.982803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.982898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.982909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.982916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.982928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.982938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.982957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.982964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.982974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.671 [2024-10-01 15:59:06.982984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.982990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.982997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.983011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.983018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.995355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.995377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.671 [2024-10-01 15:59:06.995538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.995553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.671 [2024-10-01 15:59:06.995560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.995778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.671 [2024-10-01 15:59:06.995788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.671 [2024-10-01 15:59:06.995796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.671 [2024-10-01 15:59:06.995808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.995818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.671 [2024-10-01 15:59:06.995828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.995834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.995841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.995850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.671 [2024-10-01 15:59:06.995856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.671 [2024-10-01 15:59:06.995868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.671 [2024-10-01 15:59:06.995882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.671 [2024-10-01 15:59:06.995889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.671 
00:24:57.672                                                                                                      Latency(us)
00:24:57.672 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:57.672 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:57.672 Verification LBA range: start 0x0 length 0x4000
00:24:57.672 	 NVMe0n1                             :      15.01   11380.60      44.46      0.00     0.00   11225.58    1997.29   16352.79
00:24:57.672 ===================================================================================================================
00:24:57.672 Total                                  :              11380.60      44.46      0.00     0.00   11225.58    1997.29   16352.79
00:24:57.672 [2024-10-01 15:59:07.006636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.672 [2024-10-01 15:59:07.006659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.672 [2024-10-01 15:59:07.007423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-10-01 15:59:07.007440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.672 [2024-10-01 15:59:07.007448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.672 [2024-10-01 15:59:07.007686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-10-01 15:59:07.007697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.672 [2024-10-01 15:59:07.007704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.672 [2024-10-01 15:59:07.007715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.672 [2024-10-01 15:59:07.007725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.007734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.007740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.007747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.007755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.007761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.007768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.007778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.672 [2024-10-01 15:59:07.007785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.016710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.016731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.016950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.016963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.016970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.017209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.017221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.017228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.017236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.017247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.017255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.017261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.017268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.672 [2024-10-01 15:59:07.017277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.672 [2024-10-01 15:59:07.017287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.017293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.017300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.017309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.672 [2024-10-01 15:59:07.026761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.027006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.027021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.027029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.027044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.027054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.027067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.027073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.027080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.027088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.672 [2024-10-01 15:59:07.027334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.027346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.027353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.027362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.027370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.027377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.027385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.027393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.036809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.037055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.037071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.037079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.037089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.037101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.037108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.037114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.037130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.037139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.037288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.037300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.037308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.037316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.037325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.037331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.037339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.037349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.046859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.047103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.047116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.047124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.047134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.047144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.047150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.047158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.047166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.047182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.047402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.047414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.047422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.047432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.047441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.047447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.047453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.047462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.056910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.057087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.057100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.057111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.057122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.057131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.057138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.057145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.057154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.057225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.057453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.057465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.057473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.057482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.057491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.057497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.057504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.057512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.066957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.067186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.067199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.672 [2024-10-01 15:59:07.067206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.067216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.067226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.067232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.067239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.067248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.672 [2024-10-01 15:59:07.067269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.672 [2024-10-01 15:59:07.067489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.672 [2024-10-01 15:59:07.067500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.672 [2024-10-01 15:59:07.067507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.672 [2024-10-01 15:59:07.067517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.672 [2024-10-01 15:59:07.067526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.672 [2024-10-01 15:59:07.067535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.672 [2024-10-01 15:59:07.067542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.672 [2024-10-01 15:59:07.067551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.673 [2024-10-01 15:59:07.077005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.673 [2024-10-01 15:59:07.077233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-10-01 15:59:07.077245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.673 [2024-10-01 15:59:07.077253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.673 [2024-10-01 15:59:07.077263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.673 [2024-10-01 15:59:07.077272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.673 [2024-10-01 15:59:07.077279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.673 [2024-10-01 15:59:07.077285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.673 [2024-10-01 15:59:07.077294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.673 [2024-10-01 15:59:07.077314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.673 [2024-10-01 15:59:07.077475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-10-01 15:59:07.077487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.673 [2024-10-01 15:59:07.077494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.673 [2024-10-01 15:59:07.077503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.673 [2024-10-01 15:59:07.077512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.673 [2024-10-01 15:59:07.077519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.673 [2024-10-01 15:59:07.077526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.673 [2024-10-01 15:59:07.077534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.673 Received shutdown signal, test time was about 15.000000 seconds 00:24:57.673 00:24:57.673 Latency(us) 00:24:57.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.673 =================================================================================================================== 00:24:57.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=1 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # false 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # trap - ERR 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # print_backtrace 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.673 ========== Backtrace start: ========== 00:24:57.673 00:24:57.673 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh:68 -> main(["--transport=tcp"]) 00:24:57.673 ... 
00:24:57.673 63 cat $testdir/try.txt 00:24:57.673 64 # if this test fails it means we didn't fail over to the second 00:24:57.673 65 count="$(grep -c "Resetting controller successful" < $testdir/try.txt)" 00:24:57.673 66 00:24:57.673 67 if ((count != 3)); then 00:24:57.673 => 68 false 00:24:57.673 69 fi 00:24:57.673 70 00:24:57.673 71 # Part 2 of the test. Start removing ports, starting with the one we are connected to, confirm that the ctrlr remains active until the final trid is removed. 00:24:57.673 72 $rootdir/build/examples/bdevperf -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 1 -f &> $testdir/try.txt & 00:24:57.673 73 bdevperf_pid=$! 00:24:57.673 ... 00:24:57.673 00:24:57.673 ========== Backtrace end ========== 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # process_shm --id 0 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@808 -- # type=--id 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@809 -- # id=0 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:57.673 nvmf_trace.0 00:24:57.673 15:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@823 -- # return 0 00:24:57.673 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.673 [2024-10-01 15:58:50.314557] Starting SPDK v25.01-pre git sha1 3a41ae5b3 / DPDK 24.03.0 initialization... 00:24:57.673 [2024-10-01 15:58:50.314610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2532431 ] 00:24:57.673 [2024-10-01 15:58:50.384715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.673 [2024-10-01 15:58:50.458233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.673 Running I/O for 15 seconds... 00:24:57.673 11044.00 IOPS, 43.14 MiB/s [2024-10-01 15:58:53.055086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.673 [2024-10-01 15:58:53.055169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 
[2024-10-01 15:58:53.055421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.673 [2024-10-01 15:58:53.055585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.673 [2024-10-01 15:58:53.055593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.673 [2024-10-01 15:58:53.055600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 
[2024-10-01 15:58:53.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.674 [2024-10-01 15:58:53.055804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 
15:58:53.055922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.055986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.055994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056001] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 
15:58:53.056330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.674 [2024-10-01 15:58:53.056337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.674 [2024-10-01 15:58:53.056344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:57.675 [2024-10-01 15:58:53.056492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.675 [2024-10-01 15:58:53.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.675 [2024-10-01 15:58:53.056689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-10-01 15:58:53.056696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-10-01 15:58:53.056709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-10-01 15:58:53.056723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 
15:58:53.056730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.676 [2024-10-01 15:58:53.056737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056849] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 
15:58:53.056934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.056982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.056988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.056995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.056999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057089] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.676 [2024-10-01 15:58:53.057134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.676 [2024-10-01 15:58:53.057140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:24:57.676 [2024-10-01 15:58:53.057147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057187] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x992ec0 was disconnected and freed. reset controller. 
00:24:57.676 [2024-10-01 15:58:53.057250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.676 [2024-10-01 15:58:53.057261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.676 [2024-10-01 15:58:53.057275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.676 [2024-10-01 15:58:53.057289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.676 [2024-10-01 15:58:53.057296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.676 [2024-10-01 15:58:53.057302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.677 [2024-10-01 15:58:53.057309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.058255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.058283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.058489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 
15:58:53.058504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.058513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.058525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.058536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.058543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.058551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.058568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.070223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.070507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.070524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.070532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.070663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.070804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.070813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.070825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.070857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.081027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.081278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.081295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.081303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.081315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.081326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.081333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.081339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.081352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.093869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.094049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.094064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.094072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.094083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.094094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.094101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.094107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.094120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.106146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.106414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.106431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.106440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.106594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.106796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.106808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.106815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.106849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.117059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.117321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.117337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.117345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.117357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.117368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.117374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.117381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.117394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.128070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.128240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.128254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.128262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.128274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.128286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.128292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.128299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.128313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.139980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.140266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.140283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.140291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.140452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.140606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.677 [2024-10-01 15:58:53.140616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.677 [2024-10-01 15:58:53.140625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.677 [2024-10-01 15:58:53.140658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.677 [2024-10-01 15:58:53.151485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.677 [2024-10-01 15:58:53.151668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-10-01 15:58:53.151683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.677 [2024-10-01 15:58:53.151691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.677 [2024-10-01 15:58:53.151706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.677 [2024-10-01 15:58:53.151717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.151723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.151730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.151743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.162962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.163140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.163155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.163163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.163176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.163187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.163193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.163199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.163212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.174931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.175243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.175261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.175269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.175305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.175318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.175324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.175331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.175461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.186077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.186306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.186322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.186330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.186476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.186671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.186683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.186689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.186717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.196989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.197218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.197235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.197243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.197255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.197266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.197272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.197279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.197292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.207635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.207809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.207823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.207830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.207842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.207852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.207859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.207872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.207885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.218341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.218518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.218533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.218541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.218553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.218564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.218571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.218577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.218707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.228985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.229236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.229254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.229262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.229274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.229285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.229291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.229298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.229311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.241599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.678 [2024-10-01 15:58:53.241830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-10-01 15:58:53.241846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.678 [2024-10-01 15:58:53.241853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.678 [2024-10-01 15:58:53.241871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.678 [2024-10-01 15:58:53.241882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.678 [2024-10-01 15:58:53.241888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.678 [2024-10-01 15:58:53.241895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.678 [2024-10-01 15:58:53.241908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.678 [2024-10-01 15:58:53.253378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.679 [2024-10-01 15:58:53.253501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-10-01 15:58:53.253516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.679 [2024-10-01 15:58:53.253524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.679 [2024-10-01 15:58:53.253535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.679 [2024-10-01 15:58:53.253546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.679 [2024-10-01 15:58:53.253553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.679 [2024-10-01 15:58:53.253559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.679 [2024-10-01 15:58:53.253572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.679 [2024-10-01 15:58:53.264469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.679 [2024-10-01 15:58:53.264576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-10-01 15:58:53.264591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.679 [2024-10-01 15:58:53.264598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.679 [2024-10-01 15:58:53.264610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.679 [2024-10-01 15:58:53.264624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.679 [2024-10-01 15:58:53.264631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.679 [2024-10-01 15:58:53.264637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.679 [2024-10-01 15:58:53.264650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.679 [2024-10-01 15:58:53.274900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.679 [2024-10-01 15:58:53.275075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-10-01 15:58:53.275089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.679 [2024-10-01 15:58:53.275096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.679 [2024-10-01 15:58:53.275108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.679 [2024-10-01 15:58:53.275119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.679 [2024-10-01 15:58:53.275125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.679 [2024-10-01 15:58:53.275132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.679 [2024-10-01 15:58:53.275144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.679 [2024-10-01 15:58:53.286237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.679 [2024-10-01 15:58:53.286380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.679 [2024-10-01 15:58:53.286395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.679 [2024-10-01 15:58:53.286402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.679 [2024-10-01 15:58:53.286415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.679 [2024-10-01 15:58:53.286426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.679 [2024-10-01 15:58:53.286432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.679 [2024-10-01 15:58:53.286439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.679 [2024-10-01 15:58:53.286453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.679 [2024-10-01 15:58:53.297394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.679 [2024-10-01 15:58:53.297578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.679 [2024-10-01 15:58:53.297594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.679 [2024-10-01 15:58:53.297601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.679 [2024-10-01 15:58:53.297612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.679 [2024-10-01 15:58:53.297624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.679 [2024-10-01 15:58:53.297630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.679 [2024-10-01 15:58:53.297637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.679 [2024-10-01 15:58:53.297650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.679 [2024-10-01 15:58:53.309976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.679 [2024-10-01 15:58:53.310242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.679 [2024-10-01 15:58:53.310260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.679 [2024-10-01 15:58:53.310268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.679 [2024-10-01 15:58:53.310619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.679 [2024-10-01 15:58:53.310775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.679 [2024-10-01 15:58:53.310786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.679 [2024-10-01 15:58:53.310793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.679 [2024-10-01 15:58:53.310824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.679 [2024-10-01 15:58:53.321211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.679 [2024-10-01 15:58:53.321476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.679 [2024-10-01 15:58:53.321494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.679 [2024-10-01 15:58:53.321503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.679 [2024-10-01 15:58:53.321532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.679 [2024-10-01 15:58:53.321544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.679 [2024-10-01 15:58:53.321551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.679 [2024-10-01 15:58:53.321557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.679 [2024-10-01 15:58:53.321570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.679 [2024-10-01 15:58:53.332686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.679 [2024-10-01 15:58:53.332809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.679 [2024-10-01 15:58:53.332823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.679 [2024-10-01 15:58:53.332831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.679 [2024-10-01 15:58:53.333086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.679 [2024-10-01 15:58:53.333229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.679 [2024-10-01 15:58:53.333240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.333246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.333385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.344309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.344432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.344446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.344457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.344469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.344479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.344485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.344492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.344505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.356675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.356954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.356972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.356980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.357122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.357152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.357159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.357166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.357179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.366774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.366886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.366901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.366909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.366920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.366931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.366937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.366944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.366957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.379566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.379972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.379991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.379999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.380142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.380490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.380502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.380512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.380668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.390047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.390183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.390198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.390206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.390217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.390228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.390235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.390241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.390256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.401523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.401630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.401644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.401652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.401663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.401674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.401680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.401687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.401700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.414285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.414639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.414657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.414665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.414779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.414961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.414972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.414979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.680 [2024-10-01 15:58:53.415009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.680 [2024-10-01 15:58:53.425231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.680 [2024-10-01 15:58:53.425374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.680 [2024-10-01 15:58:53.425390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.680 [2024-10-01 15:58:53.425398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.680 [2024-10-01 15:58:53.425410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.680 [2024-10-01 15:58:53.425420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.680 [2024-10-01 15:58:53.425426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.680 [2024-10-01 15:58:53.425433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.425625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.436217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.436372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.436388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.436395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.436555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.436589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.436596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.436602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.436616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.447857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.448118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.448135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.448143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.448172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.448184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.448190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.448197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.448211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.459761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.459897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.459913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.459921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.459936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.459947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.459953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.459960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.459972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.471739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.471886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.471903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.471910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.472247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.472403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.472414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.472420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.472452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.482980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.483109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.483125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.483132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.483468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.483623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.483633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.483640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.483672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.493047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.493177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.493192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.493199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.493335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.493407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.493416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.493427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.493451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.503985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.504195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.504211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.504218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.504393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.504609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.504621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.504627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.504657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.514340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.514462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.514477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.514485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.514497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.681 [2024-10-01 15:58:53.514508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.681 [2024-10-01 15:58:53.514514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.681 [2024-10-01 15:58:53.514521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.681 [2024-10-01 15:58:53.514534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.681 [2024-10-01 15:58:53.526330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.681 [2024-10-01 15:58:53.526463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.681 [2024-10-01 15:58:53.526477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.681 [2024-10-01 15:58:53.526485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.681 [2024-10-01 15:58:53.526497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.526508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.526514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.526520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.526533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.536720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.536841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.536860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.536872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.536884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.536895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.536901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.536907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.536920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.548697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.548872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.548886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.548894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.548905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.548916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.548922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.548929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.548942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.559310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.559504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.559519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.559526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.559538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.559548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.559554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.559561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.559574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.569845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.570097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.570113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.570121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.570250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.570396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.570407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.570414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.570443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.580847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.581103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.581119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.581126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.581139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.581150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.581156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.581163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.581176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.592101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.592344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.592359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.592367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.592380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.592390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.592397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.592403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.592416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.603166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.682 [2024-10-01 15:58:53.603383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.682 [2024-10-01 15:58:53.603400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.682 [2024-10-01 15:58:53.603407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.682 [2024-10-01 15:58:53.603536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.682 [2024-10-01 15:58:53.603574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.682 [2024-10-01 15:58:53.603582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.682 [2024-10-01 15:58:53.603589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.682 [2024-10-01 15:58:53.603716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.682 [2024-10-01 15:58:53.613766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.682 [2024-10-01 15:58:53.613921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.682 [2024-10-01 15:58:53.613937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.682 [2024-10-01 15:58:53.613944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.682 [2024-10-01 15:58:53.613956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.682 [2024-10-01 15:58:53.613966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.682 [2024-10-01 15:58:53.613973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.682 [2024-10-01 15:58:53.613979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.682 [2024-10-01 15:58:53.613992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.626698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.627410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.627430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.627438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.627738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.627900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.627911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.627917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.627949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.637756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.638105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.638123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.638130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.638274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.638303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.638310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.638317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.638331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.648718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.649084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.649102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.649113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.649256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.649282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.649289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.649295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.649309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.660263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.660432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.660446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.660454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.660466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.660477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.660483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.660489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.660502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.671879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.672104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.672119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.672127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.672138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.672149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.672156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.672162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.672176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.683634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.683861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.683881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.683889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.683901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.683912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.683921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.683928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.683941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.695329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.695553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.695569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.695576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.695588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.695598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.695605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.695612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.695625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.706983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.707204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.707219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.707227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.707239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.707250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.707256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.707262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.707275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.683 [2024-10-01 15:58:53.719010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.683 [2024-10-01 15:58:53.719206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.683 [2024-10-01 15:58:53.719229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.683 [2024-10-01 15:58:53.719237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.683 [2024-10-01 15:58:53.719572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.683 [2024-10-01 15:58:53.719738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.683 [2024-10-01 15:58:53.719749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.683 [2024-10-01 15:58:53.719755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.683 [2024-10-01 15:58:53.719904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.730211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.730411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.730435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.730443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.730572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.730601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.730609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.730615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.730629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.741129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.741371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.741388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.741396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.741525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.741563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.741571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.741577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.741706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.751779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.752029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.752048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.752055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.752147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.752211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.752218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.752224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.752349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.762156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.762326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.762341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.762349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.762364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.762375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.762381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.762387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.762402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.772222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.772456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.772471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.772478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.772490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.772501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.772507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.772513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.772527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.782961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.783154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.783168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.783175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.783188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.783198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.783205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.783211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.783224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.794984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.795146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.795160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.795167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.795179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.795190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.795197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.795206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.795219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.805050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.805299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.684 [2024-10-01 15:58:53.805315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.684 [2024-10-01 15:58:53.805322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.684 [2024-10-01 15:58:53.805334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.684 [2024-10-01 15:58:53.805344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.684 [2024-10-01 15:58:53.805351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.684 [2024-10-01 15:58:53.805358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.684 [2024-10-01 15:58:53.805370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.684 [2024-10-01 15:58:53.815872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.684 [2024-10-01 15:58:53.816117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.816132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.816139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.816152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.816169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.816175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.816182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.816195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.826662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.826856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.826878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.826886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.827016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.827047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.827055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.827062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.827076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.837983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.838313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.838335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.838343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.838372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.838383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.838389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.838396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.838649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.851357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.852130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.852150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.852159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.852442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.852490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.852498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.852505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.852519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.862237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.862487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.862503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.862511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.862523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.862534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.862540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.862547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.862560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.874102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.874327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.874342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.874350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.874362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.874376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.874382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.874388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.874401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.885860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.886109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.886125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.886133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.886145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.886156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.886162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.685 [2024-10-01 15:58:53.886168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.685 [2024-10-01 15:58:53.886181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.685 [2024-10-01 15:58:53.897419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.685 [2024-10-01 15:58:53.897692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.685 [2024-10-01 15:58:53.897708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.685 [2024-10-01 15:58:53.897716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.685 [2024-10-01 15:58:53.898590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.685 [2024-10-01 15:58:53.899131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.685 [2024-10-01 15:58:53.899143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.899150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.899343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.909840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.910218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.910237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.910245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.910418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.910563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.910573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.910579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.910611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.920521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.920825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.920843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.920851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.920998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.921037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.921045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.921051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.921065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.931073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.931318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.931334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.931341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.931353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.931364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.931370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.931376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.931389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.943782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.943960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.943976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.943983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.943996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.944007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.944013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.944020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.944033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.954479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.954673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.954688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.954699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.954711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.954721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.954728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.954734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.954747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.966301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 11224.50 IOPS, 43.85 MiB/s [2024-10-01 15:58:53.967846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.967867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.967875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.968836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.969537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.969550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.969556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.969751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.977889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.978080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.978093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.978101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.978112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.978123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.978129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.978136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.979014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:53.990075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:53.990421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:53.990439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.686 [2024-10-01 15:58:53.990447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.686 [2024-10-01 15:58:53.990591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.686 [2024-10-01 15:58:53.990628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.686 [2024-10-01 15:58:53.990641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.686 [2024-10-01 15:58:53.990647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.686 [2024-10-01 15:58:53.990661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.686 [2024-10-01 15:58:54.001002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.686 [2024-10-01 15:58:54.001360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.686 [2024-10-01 15:58:54.001378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.001386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.001528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.001558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.001565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.001571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.001586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.013043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.013262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.013283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.013290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.013302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.013313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.013320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.013326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.013339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.024390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.024623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.024639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.024647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.024659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.024669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.024676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.024682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.024695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.036017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.036440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.036458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.036466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.036612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.036642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.036649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.036656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.036671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.048272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.048521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.048536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.048544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.048556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.048567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.048573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.048579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.048592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.060658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.061018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.061037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.061044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.061393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.061561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.061573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.061580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.061611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.071910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.072136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.072152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.072159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.072175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.072185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.072191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.072198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.072211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.084201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.084575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.084593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.084601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.084803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.084839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.084846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.084853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.687 [2024-10-01 15:58:54.084987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.687 [2024-10-01 15:58:54.095417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.687 [2024-10-01 15:58:54.095640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.687 [2024-10-01 15:58:54.095655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.687 [2024-10-01 15:58:54.095663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.687 [2024-10-01 15:58:54.096006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.687 [2024-10-01 15:58:54.096168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.687 [2024-10-01 15:58:54.096178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.687 [2024-10-01 15:58:54.096185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.096216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.106827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.107095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.107111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.107119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.107131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.107142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.107148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.107158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.107171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.117562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.117800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.117815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.117823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.117835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.117846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.117852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.117858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.117878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.129674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.129918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.129934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.129942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.129954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.129972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.129979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.129986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.129999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.141867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.142120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.142136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.142143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.142156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.142166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.142172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.142178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.142192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.153765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.154070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.154092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.154100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.154129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.154141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.154147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.154153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.154166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.164459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.164712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.164727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.164735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.164747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.164758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.164764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.164770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.164783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.177240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.177560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.177579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.177586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.177760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.177793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.177801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.177807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.177821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.188833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.189149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.189167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.189175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.189316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.189361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.189371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.189380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.189395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.199453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.199610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.688 [2024-10-01 15:58:54.199625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.688 [2024-10-01 15:58:54.199633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.688 [2024-10-01 15:58:54.199645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.688 [2024-10-01 15:58:54.199656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.688 [2024-10-01 15:58:54.199663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.688 [2024-10-01 15:58:54.199670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.688 [2024-10-01 15:58:54.199683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.688 [2024-10-01 15:58:54.209657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.688 [2024-10-01 15:58:54.211949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.211971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.211979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.212583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.212932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.212944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.212952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.213105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.223608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.223762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.223776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.223784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.223796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.223807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.223813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.223820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.223836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.234681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.234932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.234949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.234957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.234971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.234982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.234989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.234995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.235008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.246823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.247172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.247190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.247198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.247212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.247223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.247229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.247236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.247249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.256898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.257098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.257118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.257125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.257492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.257540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.257548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.257555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.257569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.266964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.267152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.267167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.267177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.268033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.268732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.268745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.268752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.269473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.277031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.277276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.277292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.277299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.278499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.278753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.278766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.278773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.278876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.287802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.287992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.288007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.288015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.288027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.288037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.288043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.288050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.288063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.689 [2024-10-01 15:58:54.298546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.689 [2024-10-01 15:58:54.298820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.689 [2024-10-01 15:58:54.298835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.689 [2024-10-01 15:58:54.298843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.689 [2024-10-01 15:58:54.298855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.689 [2024-10-01 15:58:54.298871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.689 [2024-10-01 15:58:54.298881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.689 [2024-10-01 15:58:54.298887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.689 [2024-10-01 15:58:54.298901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.308611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.308776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.690 [2024-10-01 15:58:54.308792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.690 [2024-10-01 15:58:54.308799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.690 [2024-10-01 15:58:54.308811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.690 [2024-10-01 15:58:54.308822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.690 [2024-10-01 15:58:54.308828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.690 [2024-10-01 15:58:54.308835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.690 [2024-10-01 15:58:54.308848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.318677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.319348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.690 [2024-10-01 15:58:54.319367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.690 [2024-10-01 15:58:54.319375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.690 [2024-10-01 15:58:54.320018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.690 [2024-10-01 15:58:54.320638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.690 [2024-10-01 15:58:54.320651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.690 [2024-10-01 15:58:54.320657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.690 [2024-10-01 15:58:54.320822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.328744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.328940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.690 [2024-10-01 15:58:54.328956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.690 [2024-10-01 15:58:54.328963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.690 [2024-10-01 15:58:54.328975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.690 [2024-10-01 15:58:54.328986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.690 [2024-10-01 15:58:54.328993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.690 [2024-10-01 15:58:54.328999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.690 [2024-10-01 15:58:54.329012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.341729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.342133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.690 [2024-10-01 15:58:54.342152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.690 [2024-10-01 15:58:54.342160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.690 [2024-10-01 15:58:54.342303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.690 [2024-10-01 15:58:54.342333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.690 [2024-10-01 15:58:54.342340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.690 [2024-10-01 15:58:54.342347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.690 [2024-10-01 15:58:54.342362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.352127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.352377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.690 [2024-10-01 15:58:54.352392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.690 [2024-10-01 15:58:54.352400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.690 [2024-10-01 15:58:54.352529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.690 [2024-10-01 15:58:54.352672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.690 [2024-10-01 15:58:54.352682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.690 [2024-10-01 15:58:54.352689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.690 [2024-10-01 15:58:54.352718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.690 [2024-10-01 15:58:54.363880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.690 [2024-10-01 15:58:54.364057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.364071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.364079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.364091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.364102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.364108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.364115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.364128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.376702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.376947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.376964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.376972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.376987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.376998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.377004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.377010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.377024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.388682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.389052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.389071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.389079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.389226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.389265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.389273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.389280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.389293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.400171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.400573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.400592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.400600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.400741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.400771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.400778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.400785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.400799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.410598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.410823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.410839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.410847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.410859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.410875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.410882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.410892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.410905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.423372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.423775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.423794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.423802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.423954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.423989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.423996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.424003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.424017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.434484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.434874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.434892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.434900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.435041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.435071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.435078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.435085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.435212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.445493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.445738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.445754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.445761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.445774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.445784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.445791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.445797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.445810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.458883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.459279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.459301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.691 [2024-10-01 15:58:54.459309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.691 [2024-10-01 15:58:54.459452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.691 [2024-10-01 15:58:54.459602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.691 [2024-10-01 15:58:54.459612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.691 [2024-10-01 15:58:54.459619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.691 [2024-10-01 15:58:54.459649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.691 [2024-10-01 15:58:54.469925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.691 [2024-10-01 15:58:54.470302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.691 [2024-10-01 15:58:54.470320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.470328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.470471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.470499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.470507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.470513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.470527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.481595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.481842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.481857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.481871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.481883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.481894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.481901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.481907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.481921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.494021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.494178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.494192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.494200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.494211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.494225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.494232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.494238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.494251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.505783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.506050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.506067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.506075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.506087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.506098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.506104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.506110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.506124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.517857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.518035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.518049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.518056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.518068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.518079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.518085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.518092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.518104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.530133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.530541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.530560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.530568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.530712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.530868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.530879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.530886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.530921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.540912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.541153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.541169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.541176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.541189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.541200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.541206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.541213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.541226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.552928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.553177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.553193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.553200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.553213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.553224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.553230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.553237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.553250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.564905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.565268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.565287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.565295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.565437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.565466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.565473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.565480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.565494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.576651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.692 [2024-10-01 15:58:54.576848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.692 [2024-10-01 15:58:54.576868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.692 [2024-10-01 15:58:54.576879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.692 [2024-10-01 15:58:54.576891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.692 [2024-10-01 15:58:54.576902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.692 [2024-10-01 15:58:54.576908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.692 [2024-10-01 15:58:54.576914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.692 [2024-10-01 15:58:54.576927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.692 [2024-10-01 15:58:54.588820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.589007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.589022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.589030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.589042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.589053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.589059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.589065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.589078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.601057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.601384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.601402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.601410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.601584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.601617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.601625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.601631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.601645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.611753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.612074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.612092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.612101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.612243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.612273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.612283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.612290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.612305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.623227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.623401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.623415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.623423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.623434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.623446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.623452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.623458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.623472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.635845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.636033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.636049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.636056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.636069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.636080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.636086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.636093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.636106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.648225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.648353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.648368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.648375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.648387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.648398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.648404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.648410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.648423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.660616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.660935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.660954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.660962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.661105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.661134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.661141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.661147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.661162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.671998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.672122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.672137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.672145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.672156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.693 [2024-10-01 15:58:54.672167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.693 [2024-10-01 15:58:54.672173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.693 [2024-10-01 15:58:54.672179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.693 [2024-10-01 15:58:54.672192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.693 [2024-10-01 15:58:54.682760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.693 [2024-10-01 15:58:54.682988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.693 [2024-10-01 15:58:54.683005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.693 [2024-10-01 15:58:54.683012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.693 [2024-10-01 15:58:54.683024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.694 [2024-10-01 15:58:54.683035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.694 [2024-10-01 15:58:54.683041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.694 [2024-10-01 15:58:54.683048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.694 [2024-10-01 15:58:54.683061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.694 [2024-10-01 15:58:54.694086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.694 [2024-10-01 15:58:54.694260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.694 [2024-10-01 15:58:54.694274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.694 [2024-10-01 15:58:54.694281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.694 [2024-10-01 15:58:54.694297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.694 [2024-10-01 15:58:54.694308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.694 [2024-10-01 15:58:54.694314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.694 [2024-10-01 15:58:54.694321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.694 [2024-10-01 15:58:54.694334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.694 [2024-10-01 15:58:54.705682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.694 [2024-10-01 15:58:54.706055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.694 [2024-10-01 15:58:54.706075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.694 [2024-10-01 15:58:54.706083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.694 [2024-10-01 15:58:54.706172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.694 [2024-10-01 15:58:54.706186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.694 [2024-10-01 15:58:54.706192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.694 [2024-10-01 15:58:54.706198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.694 [2024-10-01 15:58:54.706212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.694 [2024-10-01 15:58:54.717378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.694 [2024-10-01 15:58:54.717732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.694 [2024-10-01 15:58:54.717750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.694 [2024-10-01 15:58:54.717758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.694 [2024-10-01 15:58:54.717778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.694 [2024-10-01 15:58:54.717790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.694 [2024-10-01 15:58:54.717797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.694 [2024-10-01 15:58:54.717803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.694 [2024-10-01 15:58:54.717816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.694 [2024-10-01 15:58:54.728106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.694 [2024-10-01 15:58:54.728725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.694 [2024-10-01 15:58:54.728745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.694 [2024-10-01 15:58:54.728753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.694 [2024-10-01 15:58:54.728916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.694 [2024-10-01 15:58:54.728947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.694 [2024-10-01 15:58:54.728955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.694 [2024-10-01 15:58:54.728965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.694 [2024-10-01 15:58:54.728980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.694 [2024-10-01 15:58:54.739236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.694 [2024-10-01 15:58:54.739596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.694 [2024-10-01 15:58:54.739615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.694 [2024-10-01 15:58:54.739622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.694 [2024-10-01 15:58:54.739763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.694 [2024-10-01 15:58:54.739792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.694 [2024-10-01 15:58:54.739800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.694 [2024-10-01 15:58:54.739807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.694 [2024-10-01 15:58:54.739821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.694 [2024-10-01 15:58:54.750228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.694 [2024-10-01 15:58:54.750499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.694 [2024-10-01 15:58:54.750516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.694 [2024-10-01 15:58:54.750524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.694 [2024-10-01 15:58:54.750553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.694 [2024-10-01 15:58:54.750565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.694 [2024-10-01 15:58:54.750572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.694 [2024-10-01 15:58:54.750578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.694 [2024-10-01 15:58:54.750592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.694 [2024-10-01 15:58:54.761068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.694 [2024-10-01 15:58:54.761247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.694 [2024-10-01 15:58:54.761262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.694 [2024-10-01 15:58:54.761269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.694 [2024-10-01 15:58:54.761281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.694 [2024-10-01 15:58:54.761292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.694 [2024-10-01 15:58:54.761299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.694 [2024-10-01 15:58:54.761305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.694 [2024-10-01 15:58:54.761318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.694 [2024-10-01 15:58:54.772325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.694 [2024-10-01 15:58:54.772472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.694 [2024-10-01 15:58:54.772493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.694 [2024-10-01 15:58:54.772500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.694 [2024-10-01 15:58:54.772512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.772523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.772529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.772535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.772549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.784003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.784257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.784273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.784280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.784292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.784303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.784309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.784316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.784328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.797351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.797706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.797724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.797732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.797879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.798022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.798031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.798038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.798068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.808198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.808342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.808357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.808365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.808376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.808391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.808397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.808403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.808417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.819462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.819718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.819735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.819743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.819756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.819766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.819773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.819779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.819792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.829688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.829799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.829814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.829822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.829834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.829846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.829852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.829859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.829879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.841800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.841983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.841999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.842007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.842019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.842030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.695 [2024-10-01 15:58:54.842036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.695 [2024-10-01 15:58:54.842043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.695 [2024-10-01 15:58:54.842060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.695 [2024-10-01 15:58:54.852155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.695 [2024-10-01 15:58:54.852280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.695 [2024-10-01 15:58:54.852295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.695 [2024-10-01 15:58:54.852303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.695 [2024-10-01 15:58:54.852315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.695 [2024-10-01 15:58:54.852326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.852332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.852338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.852352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.863770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.863924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.863939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.863947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.863960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.863970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.863977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.863983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.863996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.874693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.874961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.874978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.874986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.874998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.875009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.875015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.875022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.875035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.886676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.887016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.887034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.887046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.887327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.887360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.887368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.887375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.887389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.898713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.899108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.899127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.899135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.899287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.899316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.899323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.899330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.899344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.909005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.909179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.909193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.909201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.909213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.909224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.909230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.909237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.909249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.920252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.920450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.920472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.920479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.920609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.920638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.920649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.920656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.920669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.930982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.931110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.931124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.931131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.931143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.931154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.931160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.931166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.931180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.941887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.941981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.941996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.942003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.942015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.696 [2024-10-01 15:58:54.942026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.696 [2024-10-01 15:58:54.942032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.696 [2024-10-01 15:58:54.942039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.696 [2024-10-01 15:58:54.942052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.696 [2024-10-01 15:58:54.953013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.696 [2024-10-01 15:58:54.953159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.696 [2024-10-01 15:58:54.953175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.696 [2024-10-01 15:58:54.953183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.696 [2024-10-01 15:58:54.953318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:54.953348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:54.953356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:54.953362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:54.953376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 [2024-10-01 15:58:54.964141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.697 [2024-10-01 15:58:54.964264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.697 [2024-10-01 15:58:54.964279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.697 [2024-10-01 15:58:54.964287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.697 [2024-10-01 15:58:54.964299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:54.964309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:54.964315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:54.964322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:54.964335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 11244.33 IOPS, 43.92 MiB/s [2024-10-01 15:58:54.975839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.697 [2024-10-01 15:58:54.976039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.697 [2024-10-01 15:58:54.976055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.697 [2024-10-01 15:58:54.976064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.697 [2024-10-01 15:58:54.976242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:54.976345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:54.976356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:54.976363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:54.976386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 [2024-10-01 15:58:54.986379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.697 [2024-10-01 15:58:54.986652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.697 [2024-10-01 15:58:54.986669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.697 [2024-10-01 15:58:54.986677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.697 [2024-10-01 15:58:54.986698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:54.986709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:54.986716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:54.986722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:54.986737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 [2024-10-01 15:58:54.996447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.697 [2024-10-01 15:58:54.996687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.697 [2024-10-01 15:58:54.996703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.697 [2024-10-01 15:58:54.996711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.697 [2024-10-01 15:58:54.996845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:54.996881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:54.996889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:54.996895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:54.996909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 [2024-10-01 15:58:55.006588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.697 [2024-10-01 15:58:55.006831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.697 [2024-10-01 15:58:55.006847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.697 [2024-10-01 15:58:55.006854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.697 [2024-10-01 15:58:55.006872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.697 [2024-10-01 15:58:55.006883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.697 [2024-10-01 15:58:55.006890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.697 [2024-10-01 15:58:55.006896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.697 [2024-10-01 15:58:55.006909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.697 [2024-10-01 15:58:55.018071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.697 [2024-10-01 15:58:55.018402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.697 [2024-10-01 15:58:55.018420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.697 [2024-10-01 15:58:55.018428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.697 [2024-10-01 15:58:55.018570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.697 [2024-10-01 15:58:55.018599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.697 [2024-10-01 15:58:55.018607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.697 [2024-10-01 15:58:55.018614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.697 [2024-10-01 15:58:55.018627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.697 [2024-10-01 15:58:55.028730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.697 [2024-10-01 15:58:55.028906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.697 [2024-10-01 15:58:55.028922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.697 [2024-10-01 15:58:55.028930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.697 [2024-10-01 15:58:55.028942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.697 [2024-10-01 15:58:55.028953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.697 [2024-10-01 15:58:55.028959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.697 [2024-10-01 15:58:55.028970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.697 [2024-10-01 15:58:55.028984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.697 [2024-10-01 15:58:55.040055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.697 [2024-10-01 15:58:55.040255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.697 [2024-10-01 15:58:55.040271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.697 [2024-10-01 15:58:55.040279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.697 [2024-10-01 15:58:55.040291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.697 [2024-10-01 15:58:55.040302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.697 [2024-10-01 15:58:55.040309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.697 [2024-10-01 15:58:55.040315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.697 [2024-10-01 15:58:55.040328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.697 [2024-10-01 15:58:55.051024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.697 [2024-10-01 15:58:55.051195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.697 [2024-10-01 15:58:55.051210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.697 [2024-10-01 15:58:55.051217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.697 [2024-10-01 15:58:55.051229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.051239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.051246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.051253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.051266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.061528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.061770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.061785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.061792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.061805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.061816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.061822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.061828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.061841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.073391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.073643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.073657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.073665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.073677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.073688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.073694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.073701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.073714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.085086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.085279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.085294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.085302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.085313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.085324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.085330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.085336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.085349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.096037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.096216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.096232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.096240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.096575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.096733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.096743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.096750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.096781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.106896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.107082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.107097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.107104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.107234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.107268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.107276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.107282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.107296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.118039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.118156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.118170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.118178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.118623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.118793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.118804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.118811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.118841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.129490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.129654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.129668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.129675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.129687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.129698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.129705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.129711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.129724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.140713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.140943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.140960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.140967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.698 [2024-10-01 15:58:55.140979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.698 [2024-10-01 15:58:55.140990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.698 [2024-10-01 15:58:55.140997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.698 [2024-10-01 15:58:55.141003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.698 [2024-10-01 15:58:55.141021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.698 [2024-10-01 15:58:55.151962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.698 [2024-10-01 15:58:55.152512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.698 [2024-10-01 15:58:55.152532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.698 [2024-10-01 15:58:55.152540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.152800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.152959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.152970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.152977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.153008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.164291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.164513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.164528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.164537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.164548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.164559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.164565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.164572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.164585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.175107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.175265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.175279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.175287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.175298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.175309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.175315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.175322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.175335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.186476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.186738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.186754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.186765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.187078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.187233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.187244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.187251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.187394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.198304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.198554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.198570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.198578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.198590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.198608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.198615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.198621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.198634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.209949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.210204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.210219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.210227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.210239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.210249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.210256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.210262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.210275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.221398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.221643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.221660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.221667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.221679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.221690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.221700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.221706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.221720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.232668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.232982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.233000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.233008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.233151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.233180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.233187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.233194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.233208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.699 [2024-10-01 15:58:55.243266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.699 [2024-10-01 15:58:55.243442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.699 [2024-10-01 15:58:55.243457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.699 [2024-10-01 15:58:55.243465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.699 [2024-10-01 15:58:55.243476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.699 [2024-10-01 15:58:55.243487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.699 [2024-10-01 15:58:55.243493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.699 [2024-10-01 15:58:55.243499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.699 [2024-10-01 15:58:55.243512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.254771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.255021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.255039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.255047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.255059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.255070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.255077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.255083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.255097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.266007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.266120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.266134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.266142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.266153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.266164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.266170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.266177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.266190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.276073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.276317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.276332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.276340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.276352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.276363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.276370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.276376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.276389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.286882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.287103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.287118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.287126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.287310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.287462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.287472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.287479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.287509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.298379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.298577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.298594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.298601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.298617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.298629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.298635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.298642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.298656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.309012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.309259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.309274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.309282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.309971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.310441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.310454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.310461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.310625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.321318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.321642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.321660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.321668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.321811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.321840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.321847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.321854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.321874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.331705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.331903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.331919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.331927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.332056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.332086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.332093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.332114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.700 [2024-10-01 15:58:55.332127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.700 [2024-10-01 15:58:55.343059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.700 [2024-10-01 15:58:55.343254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.700 [2024-10-01 15:58:55.343268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.700 [2024-10-01 15:58:55.343276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.700 [2024-10-01 15:58:55.343287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.700 [2024-10-01 15:58:55.343298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.700 [2024-10-01 15:58:55.343305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.700 [2024-10-01 15:58:55.343311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.343325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.354331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.354452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.354466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.354474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.354486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.354496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.354503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.354509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.354522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.366947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.367323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.367342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.367350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.367495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.367525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.367533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.367540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.367554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.377877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.378124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.378142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.378150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.378292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.378322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.378329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.378336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.378350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.389206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.389317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.389331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.389339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.389350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.389360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.389367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.389373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.389386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.400295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.400562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.400578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.400585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.400597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.400608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.400614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.400621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.400634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.411443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.411825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.411843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.411851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.412182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.412339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.412350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.412358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.412500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.424219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.424406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.424420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.424427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.424439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.424451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.424457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.424464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.424477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.435093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.435336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.435351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.435359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.701 [2024-10-01 15:58:55.435371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.701 [2024-10-01 15:58:55.435382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.701 [2024-10-01 15:58:55.435389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.701 [2024-10-01 15:58:55.435395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.701 [2024-10-01 15:58:55.435408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.701 [2024-10-01 15:58:55.445690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.701 [2024-10-01 15:58:55.445962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.701 [2024-10-01 15:58:55.445978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.701 [2024-10-01 15:58:55.445986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.445998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.446009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.446016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.446022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.446039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.456819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.457011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.457027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.457035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.457371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.457528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.457539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.457545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.457689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.468175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.468534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.468553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.468561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.468705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.468735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.468742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.468749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.468763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.478928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.479114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.479129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.479137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.479266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.479297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.479305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.479311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.479325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.490877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.491102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.491121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.491128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.491140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.491151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.491157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.491164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.491176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.503231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.503410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.503425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.503432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.503444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.503455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.503462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.503469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.503482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.514551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.514877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.514895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.514903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.515248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.515405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.515416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.515422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.515453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.527370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.527544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.527559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.527566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.527578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.527592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.527598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.527605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.527618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.538360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.702 [2024-10-01 15:58:55.538583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.702 [2024-10-01 15:58:55.538599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.702 [2024-10-01 15:58:55.538606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.702 [2024-10-01 15:58:55.538618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.702 [2024-10-01 15:58:55.538628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.702 [2024-10-01 15:58:55.538635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.702 [2024-10-01 15:58:55.538641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.702 [2024-10-01 15:58:55.538654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.702 [2024-10-01 15:58:55.549720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.549974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.549992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.550000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.550140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.550169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.550176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.550183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.550197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.560831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.561237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.561256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.561264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.561408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.561439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.561447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.561453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.561467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.572032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.572333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.572350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.572358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.572386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.572398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.572404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.572410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.572540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.584384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.584554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.584568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.584575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.584594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.584605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.584611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.584618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.584631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.595938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.596160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.596175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.596183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.596194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.596205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.596211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.596217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.596230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.606835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.607080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.607095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.607106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.607118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.607128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.607135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.607141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.607154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.619061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.619420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.619438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.619446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.620006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.620217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.620228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.620235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.620382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.629128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.629350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.629365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.629372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.703 [2024-10-01 15:58:55.629385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.703 [2024-10-01 15:58:55.629396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.703 [2024-10-01 15:58:55.629402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.703 [2024-10-01 15:58:55.629409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.703 [2024-10-01 15:58:55.629422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.703 [2024-10-01 15:58:55.639645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.703 [2024-10-01 15:58:55.639893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.703 [2024-10-01 15:58:55.639911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.703 [2024-10-01 15:58:55.639919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.640049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.640088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.640096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.640106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.640120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.649860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.650108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.650124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.650131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.650527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.650577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.650585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.650591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.650605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.661166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.661727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.661747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.661754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.662029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.662071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.662079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.662085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.662099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.672421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.672792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.672810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.672818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.672974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.673005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.673012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.673019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.673033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.683182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.683381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.683395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.683403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.683415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.683426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.683432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.683438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.683605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.694160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.694398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.694414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.694422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.694434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.694445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.694451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.694458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.694470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.704225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.704447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.704462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.704469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.704481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.704492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.704498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.704505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.704518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.714289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.714536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.714552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.714559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.714575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.714585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.714591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.704 [2024-10-01 15:58:55.714597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.704 [2024-10-01 15:58:55.714610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.704 [2024-10-01 15:58:55.724354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.704 [2024-10-01 15:58:55.724577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.704 [2024-10-01 15:58:55.724592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.704 [2024-10-01 15:58:55.724599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.704 [2024-10-01 15:58:55.724612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.704 [2024-10-01 15:58:55.724623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.704 [2024-10-01 15:58:55.724629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.724635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.724648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.734418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.734662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.734678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.734685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.734814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.734963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.734974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.734981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.735012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.744759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.744953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.744968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.744976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.744988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.745000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.745006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.745012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.745029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.756314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.756508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.756523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.756531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.756543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.756553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.756560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.756566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.756579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.768373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.768803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.768823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.768831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.768980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.769017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.769025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.769032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.769045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.778759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.779005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.779021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.779029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.779475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.779646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.779657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.779664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.779838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.790197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.790420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.790439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.790447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.790459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.790470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.790476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.790482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.790495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.801777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.801946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.801961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.801968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.801980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.801991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.801997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.802004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.802017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.813135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.813437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.813455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.813463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.813491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.813502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.705 [2024-10-01 15:58:55.813508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.705 [2024-10-01 15:58:55.813515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.705 [2024-10-01 15:58:55.813529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.705 [2024-10-01 15:58:55.824035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.705 [2024-10-01 15:58:55.824214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.705 [2024-10-01 15:58:55.824229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.705 [2024-10-01 15:58:55.824236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.705 [2024-10-01 15:58:55.824248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.705 [2024-10-01 15:58:55.824263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.706 [2024-10-01 15:58:55.824270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.706 [2024-10-01 15:58:55.824276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.706 [2024-10-01 15:58:55.824289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.706 [2024-10-01 15:58:55.834792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.706 [2024-10-01 15:58:55.835012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.706 [2024-10-01 15:58:55.835029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.706 [2024-10-01 15:58:55.835037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.706 [2024-10-01 15:58:55.835049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.706 [2024-10-01 15:58:55.835060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.706 [2024-10-01 15:58:55.835066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.706 [2024-10-01 15:58:55.835073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.706 [2024-10-01 15:58:55.835086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.706 [2024-10-01 15:58:55.844859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.706 [2024-10-01 15:58:55.845023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.706 [2024-10-01 15:58:55.845038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.706 [2024-10-01 15:58:55.845046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.706 [2024-10-01 15:58:55.845058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.706 [2024-10-01 15:58:55.845068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.706 [2024-10-01 15:58:55.845075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.706 [2024-10-01 15:58:55.845081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.706 [2024-10-01 15:58:55.845094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.706 [2024-10-01 15:58:55.856390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.856603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.706 [2024-10-01 15:58:55.856619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.706 [2024-10-01 15:58:55.856627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.706 [2024-10-01 15:58:55.856639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.706 [2024-10-01 15:58:55.856649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.706 [2024-10-01 15:58:55.856655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.706 [2024-10-01 15:58:55.856662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.706 [2024-10-01 15:58:55.856675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.706 [2024-10-01 15:58:55.867607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.867973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.706 [2024-10-01 15:58:55.867992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.706 [2024-10-01 15:58:55.868000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.706 [2024-10-01 15:58:55.868175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.706 [2024-10-01 15:58:55.868208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.706 [2024-10-01 15:58:55.868216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.706 [2024-10-01 15:58:55.868222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.706 [2024-10-01 15:58:55.868236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.706 [2024-10-01 15:58:55.878112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.878333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.706 [2024-10-01 15:58:55.878348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.706 [2024-10-01 15:58:55.878356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.706 [2024-10-01 15:58:55.878367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.706 [2024-10-01 15:58:55.878378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.706 [2024-10-01 15:58:55.878385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.706 [2024-10-01 15:58:55.878391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.706 [2024-10-01 15:58:55.878404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.706 [2024-10-01 15:58:55.890037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.890388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.706 [2024-10-01 15:58:55.890406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.706 [2024-10-01 15:58:55.890414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.706 [2024-10-01 15:58:55.890565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.706 [2024-10-01 15:58:55.890596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.706 [2024-10-01 15:58:55.890602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.706 [2024-10-01 15:58:55.890609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.706 [2024-10-01 15:58:55.890738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.706 [2024-10-01 15:58:55.902701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.902948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.706 [2024-10-01 15:58:55.902964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.706 [2024-10-01 15:58:55.902979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.706 [2024-10-01 15:58:55.902991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.706 [2024-10-01 15:58:55.903002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.706 [2024-10-01 15:58:55.903008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.706 [2024-10-01 15:58:55.903014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.706 [2024-10-01 15:58:55.903028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.706 [2024-10-01 15:58:55.913632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.706 [2024-10-01 15:58:55.913877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.913893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.913901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.913913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.913924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.913930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.913936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.913950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.925836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.926089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.926105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.926113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.926125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.926136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.926142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.926148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.926162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.937059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.937231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.937246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.937253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.937265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.937276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.937282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.937292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.937306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.948401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.948642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.948657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.948664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.948849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.948998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.949009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.949016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.949055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.958825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.959188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.959206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.959213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.959354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.959392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.959400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.959407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.959421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.969748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.969974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.969990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.969999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.970011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.970021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.970027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.970034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.970047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 11329.75 IOPS, 44.26 MiB/s
00:24:57.707 [2024-10-01 15:58:55.981757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.981881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.981896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.981904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.981916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.981927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.981933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.981939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.981952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:55.993479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:55.993726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:55.993743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:55.993750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:55.993762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:55.993773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:55.993779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:55.993786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:55.993799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:56.005764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.707 [2024-10-01 15:58:56.006192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.707 [2024-10-01 15:58:56.006211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.707 [2024-10-01 15:58:56.006219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.707 [2024-10-01 15:58:56.006363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.707 [2024-10-01 15:58:56.006505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.707 [2024-10-01 15:58:56.006516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.707 [2024-10-01 15:58:56.006523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.707 [2024-10-01 15:58:56.006551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.707 [2024-10-01 15:58:56.016968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.708 [2024-10-01 15:58:56.017283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.708 [2024-10-01 15:58:56.017302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.708 [2024-10-01 15:58:56.017310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.708 [2024-10-01 15:58:56.017459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.708 [2024-10-01 15:58:56.017486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.708 [2024-10-01 15:58:56.017494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.708 [2024-10-01 15:58:56.017500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.708 [2024-10-01 15:58:56.017515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.708 [2024-10-01 15:58:56.028035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.028451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.028470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.028478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.028622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.028653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.028660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.028667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.028682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.039668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.040005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.040023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.040031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.040176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.040207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.040214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.040221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.040236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.050459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.050836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.050854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.050868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.051011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.051037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.051044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.051055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.051069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.062148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.062378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.062392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.062400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.062412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.062423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.062430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.062437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.062450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.074316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.074497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.074512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.074519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.074531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.074542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.074548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.074554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.074567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.085930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.086108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.086122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.086130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.086141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.086152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.086159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.086166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.086179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.096679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.096839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.096858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.096870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.096882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.096893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.096899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.708 [2024-10-01 15:58:56.096906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.708 [2024-10-01 15:58:56.096919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.708 [2024-10-01 15:58:56.108841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.708 [2024-10-01 15:58:56.109172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.708 [2024-10-01 15:58:56.109191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.708 [2024-10-01 15:58:56.109199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.708 [2024-10-01 15:58:56.109227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.708 [2024-10-01 15:58:56.109239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.708 [2024-10-01 15:58:56.109245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.709 [2024-10-01 15:58:56.109252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.709 [2024-10-01 15:58:56.109265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.709 [2024-10-01 15:58:56.120002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.709 [2024-10-01 15:58:56.120321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.709 [2024-10-01 15:58:56.120339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.709 [2024-10-01 15:58:56.120347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.709 [2024-10-01 15:58:56.120375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.709 [2024-10-01 15:58:56.120387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.709 [2024-10-01 15:58:56.120393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.709 [2024-10-01 15:58:56.120400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.709 [2024-10-01 15:58:56.120561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.709 [2024-10-01 15:58:56.131246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.709 [2024-10-01 15:58:56.131593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.709 [2024-10-01 15:58:56.131611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.709 [2024-10-01 15:58:56.131620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.709 [2024-10-01 15:58:56.131765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.709 [2024-10-01 15:58:56.131796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.709 [2024-10-01 15:58:56.131803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.709 [2024-10-01 15:58:56.131809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.709 [2024-10-01 15:58:56.131823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.709 [2024-10-01 15:58:56.142332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.142644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.142662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.142670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.142927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.142960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.142968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.142975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.142988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.152424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.152605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.152619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.152626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.152638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.152649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.152655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.152662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.152675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.162854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.163036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.163050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.163057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.163069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.163081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.163087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.163093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.163110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.173609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.173971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.173990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.173998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.174028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.174039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.174045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.174052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.174065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.184114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.184244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.184259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.184266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.184395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.184425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.184432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.184438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.184453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.194601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.194852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.194873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.194881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.194893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.194904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.194910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.194917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.194929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.205954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.709 [2024-10-01 15:58:56.206129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.709 [2024-10-01 15:58:56.206143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.709 [2024-10-01 15:58:56.206154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.709 [2024-10-01 15:58:56.206166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.709 [2024-10-01 15:58:56.206177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.709 [2024-10-01 15:58:56.206183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.709 [2024-10-01 15:58:56.206189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.709 [2024-10-01 15:58:56.206202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.709 [2024-10-01 15:58:56.218044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.218249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.218264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.218272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.218285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.218296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.218302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.218309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.218324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.229003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.229167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.229181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.229189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.229201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.229212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.229219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.229225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.229238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.240628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.240753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.240767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.240776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.240788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.240799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.240810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.240817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.240831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.251887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.252063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.252079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.252087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.252100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.252111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.252117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.252124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.252138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.263425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.263552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.263566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.263574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.263585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.263596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.263602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.263608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.263621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.274289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.274465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.274479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.274487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.274498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.274510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.274516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.274523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.274536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.285780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.286039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.286055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.286063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.286075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.286086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.286092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.286099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.286112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.296496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.296668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.296682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.296689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.296701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.296712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.296718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.296724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.296738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.307347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.307624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.307641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.307649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.307661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.307672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.307678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.307684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.307774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.319059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.319180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.319194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.319201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.710 [2024-10-01 15:58:56.319217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.710 [2024-10-01 15:58:56.319228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.710 [2024-10-01 15:58:56.319235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.710 [2024-10-01 15:58:56.319241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.710 [2024-10-01 15:58:56.319255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.710 [2024-10-01 15:58:56.329732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.710 [2024-10-01 15:58:56.329900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.710 [2024-10-01 15:58:56.329915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.710 [2024-10-01 15:58:56.329922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.329934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.329945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.329952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.329958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.329971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.342077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.711 [2024-10-01 15:58:56.342284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.711 [2024-10-01 15:58:56.342299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.711 [2024-10-01 15:58:56.342307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.342935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.343176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.343188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.343196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.343786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.353061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.711 [2024-10-01 15:58:56.353240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.711 [2024-10-01 15:58:56.353255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.711 [2024-10-01 15:58:56.353263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.353390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.353497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.353516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.353527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.353554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.363974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.711 [2024-10-01 15:58:56.364201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.711 [2024-10-01 15:58:56.364218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.711 [2024-10-01 15:58:56.364227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.364239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.364250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.364256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.364263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.364403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.375473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.711 [2024-10-01 15:58:56.375744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.711 [2024-10-01 15:58:56.375762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.711 [2024-10-01 15:58:56.375770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.375938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.376084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.376095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.376103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.376147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.386348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.711 [2024-10-01 15:58:56.386494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.711 [2024-10-01 15:58:56.386510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.711 [2024-10-01 15:58:56.386518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.711 [2024-10-01 15:58:56.386647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.711 [2024-10-01 15:58:56.386688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.711 [2024-10-01 15:58:56.386696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.711 [2024-10-01 15:58:56.386702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.711 [2024-10-01 15:58:56.386717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.711 [2024-10-01 15:58:56.397338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.711 [2024-10-01 15:58:56.397588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.711 [2024-10-01 15:58:56.397608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.711 [2024-10-01 15:58:56.397616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.711 [2024-10-01 15:58:56.397629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.711 [2024-10-01 15:58:56.397640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.711 [2024-10-01 15:58:56.397646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.711 [2024-10-01 15:58:56.397653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.711 [2024-10-01 15:58:56.397666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.711 [2024-10-01 15:58:56.410054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.711 [2024-10-01 15:58:56.410263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.711 [2024-10-01 15:58:56.410279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.711 [2024-10-01 15:58:56.410286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.711 [2024-10-01 15:58:56.410299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.711 [2024-10-01 15:58:56.410310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.711 [2024-10-01 15:58:56.410316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.711 [2024-10-01 15:58:56.410323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.711 [2024-10-01 15:58:56.410337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.711 [2024-10-01 15:58:56.421999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.711 [2024-10-01 15:58:56.422120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.422133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.422141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.422152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.422174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.422182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.422188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.422440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.433502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.433705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.433721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.433729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.433742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.433757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.433764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.433771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.433784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.444988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.445186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.445200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.445207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.445219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.445230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.445237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.445243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.445256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.455952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.456173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.456189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.456196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.456208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.456219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.456225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.456232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.456245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.467899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.468353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.468372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.468380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.468411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.468422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.468429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.468435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.468453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.478853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.479053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.479067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.479075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.479087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.479097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.479104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.479110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.479123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.489562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.489774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.489790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.489798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.489932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.489963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.489970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.489977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.489990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.500111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.500228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.500242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.500250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.500261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.500272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.500278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.500285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.500297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.510340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.510595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.510612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.712 [2024-10-01 15:58:56.510622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.712 [2024-10-01 15:58:56.510702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.712 [2024-10-01 15:58:56.512804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.712 [2024-10-01 15:58:56.512821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.712 [2024-10-01 15:58:56.512828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.712 [2024-10-01 15:58:56.513437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.712 [2024-10-01 15:58:56.521169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.712 [2024-10-01 15:58:56.524227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.712 [2024-10-01 15:58:56.524248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.524256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.524542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.525134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.525148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.525155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.525439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.531902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.532194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.532210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.532219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.532298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.534200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.534218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.534225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.534427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.543539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.543849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.543871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.543880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.546757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.547227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.547245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.547253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.547418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.558068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.558355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.558372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.558381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.558523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.558549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.558556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.558562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.558577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.568661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.568887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.568904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.568912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.568924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.568936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.568943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.568949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.568962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.581260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.581459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.581475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.581484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.581496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.581507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.581513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.581520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.581533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.594076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.713 [2024-10-01 15:58:56.594404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.713 [2024-10-01 15:58:56.594422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.713 [2024-10-01 15:58:56.594430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.713 [2024-10-01 15:58:56.594572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.713 [2024-10-01 15:58:56.594724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.713 [2024-10-01 15:58:56.594735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.713 [2024-10-01 15:58:56.594742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.713 [2024-10-01 15:58:56.594772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.713 [2024-10-01 15:58:56.600036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.713 [2024-10-01 15:58:56.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.713 [2024-10-01 15:58:56.600254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.714 [2024-10-01 15:58:56.600609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.714 [2024-10-01 15:58:56.600710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.714 [2024-10-01 15:58:56.600718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1
lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 
[2024-10-01 15:58:56.600807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.714 [2024-10-01 15:58:56.600827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.714 [2024-10-01 15:58:56.600835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.715 [2024-10-01 15:58:56.600841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.600987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.600993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 
[2024-10-01 15:58:56.601059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 
[2024-10-01 15:58:56.601309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.715 [2024-10-01 15:58:56.601399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.715 [2024-10-01 15:58:56.601406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-10-01 15:58:56.601420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.716 [2024-10-01 15:58:56.601435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 
[2024-10-01 15:58:56.601566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27344 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 
15:58:56.601816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.716 [2024-10-01 15:58:56.601915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.716 [2024-10-01 15:58:56.601942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27448 len:8 PRP1 0x0 PRP2 0x0 00:24:57.716 [2024-10-01 15:58:56.601949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.601958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.716 [2024-10-01 15:58:56.601964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.716 [2024-10-01 15:58:56.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:24:57.716 [2024-10-01 15:58:56.601976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.716 [2024-10-01 15:58:56.602015] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x993460 was disconnected and freed. reset controller. 
00:24:57.716 [2024-10-01 15:58:56.602879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.716 [2024-10-01 15:58:56.602920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.716 [2024-10-01 15:58:56.603057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.716 [2024-10-01 15:58:56.603070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.716 [2024-10-01 15:58:56.603078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.716 [2024-10-01 15:58:56.603089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.716 [2024-10-01 15:58:56.603099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.716 [2024-10-01 15:58:56.603106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.716 [2024-10-01 15:58:56.603112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.716 [2024-10-01 15:58:56.603126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.717 [2024-10-01 15:58:56.604152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.604324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.604338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.604345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.604359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.604370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.604376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.604383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.604395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.717 [2024-10-01 15:58:56.614387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.614417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.614655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.614668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.717 [2024-10-01 15:58:56.614676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.614841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.614850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.614857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.614870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.614883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.614890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.614896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.614903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.717 [2024-10-01 15:58:56.614917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.614924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.614929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.614936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.614948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.624453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.624723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.624739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.717 [2024-10-01 15:58:56.624746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.624765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.624778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.624791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.624801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.624807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.624819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.624982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.624993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.625000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.625011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.625021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.625027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.625033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.625045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.717 [2024-10-01 15:58:56.634518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.634763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.634778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.717 [2024-10-01 15:58:56.634786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.634797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.634808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.634814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.634820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.634833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.717 [2024-10-01 15:58:56.634853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.635016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.635028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.635034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.635751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.635897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.635907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.635913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.635928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.717 [2024-10-01 15:58:56.645236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.645282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.645520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.645533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.717 [2024-10-01 15:58:56.645540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.646032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.646049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.646056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.646066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.646221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.646231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.646238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.646244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.717 [2024-10-01 15:58:56.646274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.646281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.646287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.646293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.646306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.655300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.655547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.655562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.717 [2024-10-01 15:58:56.655569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.655589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.655602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.717 [2024-10-01 15:58:56.655615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.655622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.655629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.655640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.717 [2024-10-01 15:58:56.655790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.717 [2024-10-01 15:58:56.655800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.717 [2024-10-01 15:58:56.655807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.717 [2024-10-01 15:58:56.655819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.717 [2024-10-01 15:58:56.655833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.717 [2024-10-01 15:58:56.655839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.717 [2024-10-01 15:58:56.655844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.717 [2024-10-01 15:58:56.655856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.718 [2024-10-01 15:58:56.667748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.667770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.668086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.668103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.668111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.668258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.668267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.668274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.668416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.668429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.668576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.668586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.668593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.718 [2024-10-01 15:58:56.668602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.668609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.668615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.668644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.668652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.678954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.678975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.679184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.679196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.679204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.679395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.679405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.679412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.679612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.679625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.679718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.679726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.679733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.679742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.679748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.679754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.679775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.679782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.718 [2024-10-01 15:58:56.689851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.689878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.690133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.690147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.690155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.690375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.690386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.690393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.690405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.690414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.690433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.690440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.690446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.718 [2024-10-01 15:58:56.690455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.690462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.690468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.690482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.690488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.700116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.700137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.700296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.700312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.700319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.700532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.700542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.700548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.700560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.700569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.700579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.700585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.700591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.700600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.700606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.700612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.700625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.700632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.718 [2024-10-01 15:58:56.711441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.711461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.711678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.711690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.711697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.711899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.711921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.711928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.712895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.712911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.713146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.713156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.713162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.718 [2024-10-01 15:58:56.713172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.713178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.713187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.713339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.713348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.723008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.723028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.718 [2024-10-01 15:58:56.723241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.723254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.718 [2024-10-01 15:58:56.723261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.723398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.718 [2024-10-01 15:58:56.723409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.718 [2024-10-01 15:58:56.723415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.718 [2024-10-01 15:58:56.723427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.723436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.718 [2024-10-01 15:58:56.723446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.723453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.723459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.723468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.718 [2024-10-01 15:58:56.723474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.718 [2024-10-01 15:58:56.723480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.718 [2024-10-01 15:58:56.723493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.718 [2024-10-01 15:58:56.723500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.719 [2024-10-01 15:58:56.735246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.735267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.735455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.735467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.719 [2024-10-01 15:58:56.735475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.735696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.735707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.735713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.735725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.735748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.735765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.735772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.735779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.719 [2024-10-01 15:58:56.735787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.735793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.735799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.735812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.735819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.748026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.748047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.748214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.748226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.748234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.748385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.748395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.719 [2024-10-01 15:58:56.748402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.748413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.748422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.748432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.748438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.748445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.748453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.748459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.748465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.748479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.748485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.719 [2024-10-01 15:58:56.760110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.760130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.760392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.760405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.719 [2024-10-01 15:58:56.760416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.760557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.760566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.760573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.761034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.761048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.761216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.761226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.761233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.719 [2024-10-01 15:58:56.761243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.761249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.761255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.761286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.761293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.770932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.770952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.771116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.771129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.771136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.771354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.771364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.719 [2024-10-01 15:58:56.771371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.771382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.771391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.771401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.771407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.771414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.771422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.771428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.771434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.771450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.771457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.719 [2024-10-01 15:58:56.783570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.783592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.783827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.783840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.719 [2024-10-01 15:58:56.783848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.784081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.784092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.784098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.784111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.784120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.719 [2024-10-01 15:58:56.784130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.784136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.784142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.719 [2024-10-01 15:58:56.784150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.719 [2024-10-01 15:58:56.784156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.719 [2024-10-01 15:58:56.784162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.719 [2024-10-01 15:58:56.784176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.784183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.719 [2024-10-01 15:58:56.795501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.795522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.719 [2024-10-01 15:58:56.795772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.795786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.719 [2024-10-01 15:58:56.795794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.719 [2024-10-01 15:58:56.795988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.719 [2024-10-01 15:58:56.796000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.796007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.796199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.796212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.796302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.796311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.796317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.796326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.796332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.796338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.796351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.796358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.720 [2024-10-01 15:58:56.806078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.806100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.806264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.806277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.806285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.806478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.806488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.806495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.806506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.806516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.806526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.806532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.806538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.720 [2024-10-01 15:58:56.806547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.806552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.806559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.806572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.806579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.816335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.816356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.816578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.816591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.816598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.816820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.816831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.816838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.816849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.816858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.816874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.816880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.816886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.816895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.816900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.816906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.816919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.816926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.720 [2024-10-01 15:58:56.826944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.826967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.827177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.827191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.827198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.827275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.827284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.827291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.827303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.827313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.827322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.827329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.827335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.720 [2024-10-01 15:58:56.827344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.827350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.827356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.827496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.827510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.838113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.838134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.838302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.838316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.838323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.838448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.838458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.838464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.838803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.838816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.838981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.838991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.838998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.839008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.839014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.839020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.839194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.839205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.720 [2024-10-01 15:58:56.849988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.850010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.850409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.850425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.850433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.850626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.850637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.850644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.850901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.850915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.851063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.851078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.851084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.720 [2024-10-01 15:58:56.851093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.720 [2024-10-01 15:58:56.851099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.720 [2024-10-01 15:58:56.851105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.720 [2024-10-01 15:58:56.851135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.851142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.720 [2024-10-01 15:58:56.862478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.862499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.720 [2024-10-01 15:58:56.862725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.862738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.720 [2024-10-01 15:58:56.862745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.862960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.720 [2024-10-01 15:58:56.862972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.720 [2024-10-01 15:58:56.862978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.720 [2024-10-01 15:58:56.862999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.720 [2024-10-01 15:58:56.863009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.863018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.863024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.863031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.863039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.863045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.863051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.863065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.863072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.721 [2024-10-01 15:58:56.874227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.874249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.874623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.874639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.874647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.874844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.874858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.874870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.875046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.875060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.875201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.875211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.875217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.721 [2024-10-01 15:58:56.875227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.875233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.875239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.875270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.875277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.884791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.884811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.884985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.884998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.885006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.885136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.885146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.885152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.885164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.885172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.885182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.885189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.885195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.885203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.885209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.885215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.885228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.885235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.721 [2024-10-01 15:58:56.896676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.896698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.897040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.897057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.897064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.897258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.897268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.897275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.897527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.897540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.897688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.897698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.897705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.721 [2024-10-01 15:58:56.897714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.897720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.897726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.897755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.897763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.908550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.908571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.908732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.908745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.908752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.908919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.908930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.908937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.908949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.908959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.908969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.908976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.908985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.908994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.909000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.909006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.909019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.909026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.721 [2024-10-01 15:58:56.921600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.921622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.921780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.921793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.921800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.921923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.921933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.921940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.921952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.921961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.921971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.921977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.921983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.721 [2024-10-01 15:58:56.921991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.921997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.922004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.922018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.922024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.932393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.932414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.932603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.932616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.721 [2024-10-01 15:58:56.932624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.932814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.932824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.932834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.933062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.933076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.721 [2024-10-01 15:58:56.933220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.933231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.933237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.933246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.721 [2024-10-01 15:58:56.933253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.721 [2024-10-01 15:58:56.933259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.721 [2024-10-01 15:58:56.933400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.721 [2024-10-01 15:58:56.933410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.721 [2024-10-01 15:58:56.943463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.943483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.721 [2024-10-01 15:58:56.943653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.721 [2024-10-01 15:58:56.943665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.721 [2024-10-01 15:58:56.943673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.721 [2024-10-01 15:58:56.943813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.943823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.722 [2024-10-01 15:58:56.943830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.944176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.944190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.944348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.944358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.944364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.722 [2024-10-01 15:58:56.944374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.944380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.944386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.944559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-10-01 15:58:56.944568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-10-01 15:58:56.954807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.954832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.955071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.955084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.722 [2024-10-01 15:58:56.955092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.955231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.955240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.722 [2024-10-01 15:58:56.955247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.955692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.955706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.955910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.955922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.955929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.955938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.955944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.955950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.955983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-10-01 15:58:56.955991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.722 [2024-10-01 15:58:56.966486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.966506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.966769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.966786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.722 [2024-10-01 15:58:56.966793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.966941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.966951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.722 [2024-10-01 15:58:56.966958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.967133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.967146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.967174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.967182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.967188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.722 [2024-10-01 15:58:56.967197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.967209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.967215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.967344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-10-01 15:58:56.967353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 11338.00 IOPS, 44.29 MiB/s [2024-10-01 15:58:56.978797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.978815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.978977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.978990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.722 [2024-10-01 15:58:56.978997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.979194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.979205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.722 [2024-10-01 15:58:56.979211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.980098] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.980113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.980205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.980212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.980219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.980228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.980234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.980240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.722 [2024-10-01 15:58:56.980254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.722 [2024-10-01 15:58:56.980260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.722 [2024-10-01 15:58:56.988994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.989043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.722 [2024-10-01 15:58:56.989277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.989290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.722 [2024-10-01 15:58:56.989298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.989511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.722 [2024-10-01 15:58:56.989524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.722 [2024-10-01 15:58:56.989531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.722 [2024-10-01 15:58:56.989544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.989688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.722 [2024-10-01 15:58:56.989698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.722 [2024-10-01 15:58:56.989704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.722 [2024-10-01 15:58:56.989711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.723 [2024-10-01 15:58:56.989740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:56.989748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:56.989753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:56.989760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:56.989772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.001526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.001548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.001912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.001928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.001936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.002130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.002140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.002147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.002398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.002412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.002560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.002570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.002577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.002586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.002592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.002598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.002624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.002631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.723 [2024-10-01 15:58:57.012442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.012462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.012621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.012633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.012640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.012717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.012727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.012734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.012745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.012754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.012764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.012770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.012777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.723 [2024-10-01 15:58:57.012786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.012792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.012798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.012811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.012818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.025218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.025239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.025403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.025416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.025423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.025505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.025515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.025521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.025534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.025543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.025552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.025558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.025565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.025574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.025583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.025589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.025602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.025609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.723 [2024-10-01 15:58:57.037438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.037460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.037668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.037681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.037688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.037859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.037876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.037883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.038144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.038158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.038201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.038210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.038216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.723 [2024-10-01 15:58:57.038225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.038231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.038238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.038426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.038436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.048374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.048394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.048548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.048561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.048568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.048714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.048724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.048731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.048742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.048754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.048764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.048770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.048776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.048784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.048790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.048797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.048810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.048816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.723 [2024-10-01 15:58:57.059330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.059350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.059580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.059591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.723 [2024-10-01 15:58:57.059599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.059686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.059696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.059702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.059713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.059723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.723 [2024-10-01 15:58:57.059733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.059739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.059745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.723 [2024-10-01 15:58:57.059754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.723 [2024-10-01 15:58:57.059759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.723 [2024-10-01 15:58:57.059766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.723 [2024-10-01 15:58:57.059779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.059785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.723 [2024-10-01 15:58:57.071225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.071246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.723 [2024-10-01 15:58:57.071484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.071497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.723 [2024-10-01 15:58:57.071509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.723 [2024-10-01 15:58:57.071597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.723 [2024-10-01 15:58:57.071606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.071613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.071624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.071633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.071643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.071649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.071656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.071664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.071670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.071676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.071689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.071696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.724 [2024-10-01 15:58:57.083080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.083102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.083426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.083442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.083450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.083671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.083682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.724 [2024-10-01 15:58:57.083689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.084045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.084061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.084213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.084224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.084230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.724 [2024-10-01 15:58:57.084240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.084246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.084256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.084398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.084408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.093984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.094005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.094169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.094181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.724 [2024-10-01 15:58:57.094188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.094382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.094391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.094398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.094652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.094665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.094908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.094919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.094926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.094935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.094940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.094947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.095097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.095107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.724 [2024-10-01 15:58:57.104884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.104904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.105092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.105104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.105111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.105273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.105283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.724 [2024-10-01 15:58:57.105289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.105301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.105310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.105323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.105329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.105335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.724 [2024-10-01 15:58:57.105344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.105350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.105355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.105368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.105375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.116656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.116678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.117027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.117044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.724 [2024-10-01 15:58:57.117052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.117217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.117227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.117234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.117380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.117393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.117425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.117433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.117440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.117449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.724 [2024-10-01 15:58:57.117456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.724 [2024-10-01 15:58:57.117462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.724 [2024-10-01 15:58:57.117476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.724 [2024-10-01 15:58:57.117482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.724 [2024-10-01 15:58:57.128426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.128447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.724 [2024-10-01 15:58:57.128761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.128776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.724 [2024-10-01 15:58:57.128784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.128931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.724 [2024-10-01 15:58:57.128941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.724 [2024-10-01 15:58:57.128947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.724 [2024-10-01 15:58:57.129298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.724 [2024-10-01 15:58:57.129313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.129352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.129360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.129366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.725 [2024-10-01 15:58:57.129374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.129380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.129387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.129401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.129407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.139404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.139425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.139728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.139744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.725 [2024-10-01 15:58:57.139752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.139971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.139983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.725 [2024-10-01 15:58:57.139990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.140195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.140210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.140352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.140362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.140369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.140378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.140385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.140391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.140534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.140547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.725 [2024-10-01 15:58:57.150512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.150533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.150718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.150730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.725 [2024-10-01 15:58:57.150738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.150929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.150939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.725 [2024-10-01 15:58:57.150946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.151285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.151298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.151456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.151466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.151473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.725 [2024-10-01 15:58:57.151482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.151489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.151495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.151669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.151679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.161860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.161885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.725 [2024-10-01 15:58:57.162037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.162049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.725 [2024-10-01 15:58:57.162056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.162273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.725 [2024-10-01 15:58:57.162283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.725 [2024-10-01 15:58:57.162291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.725 [2024-10-01 15:58:57.162739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.162752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.725 [2024-10-01 15:58:57.162927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.162941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.162948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.162957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.725 [2024-10-01 15:58:57.162963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.725 [2024-10-01 15:58:57.162969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.725 [2024-10-01 15:58:57.163000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.725 [2024-10-01 15:58:57.163007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.725 [2024-10-01 15:58:57.172573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.725 [2024-10-01 15:58:57.172595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.725 [2024-10-01 15:58:57.172756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.725 [2024-10-01 15:58:57.172768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.725 [2024-10-01 15:58:57.172775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.725 [2024-10-01 15:58:57.172994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.725 [2024-10-01 15:58:57.173004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.725 [2024-10-01 15:58:57.173011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.725 [2024-10-01 15:58:57.173023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.725 [2024-10-01 15:58:57.173032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.725 [2024-10-01 15:58:57.173042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.725 [2024-10-01 15:58:57.173048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.725 [2024-10-01 15:58:57.173055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.725 [2024-10-01 15:58:57.173063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.725 [2024-10-01 15:58:57.173069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.725 [2024-10-01 15:58:57.173075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.725 [2024-10-01 15:58:57.173089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.725 [2024-10-01 15:58:57.173096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.725 [2024-10-01 15:58:57.185428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.725 [2024-10-01 15:58:57.185449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.725 [2024-10-01 15:58:57.185761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.725 [2024-10-01 15:58:57.185777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.725 [2024-10-01 15:58:57.185784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.725 [2024-10-01 15:58:57.186015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.725 [2024-10-01 15:58:57.186030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.725 [2024-10-01 15:58:57.186037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.725 [2024-10-01 15:58:57.186389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.725 [2024-10-01 15:58:57.186404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.725 [2024-10-01 15:58:57.186558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.725 [2024-10-01 15:58:57.186568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.186575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.186584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.186590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.186596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.186738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.186747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.196107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.196128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.196735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.196753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.726 [2024-10-01 15:58:57.196761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.196900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.196911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.726 [2024-10-01 15:58:57.196918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.197186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.197200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.197403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.197413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.197420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.197429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.197435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.197442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.197472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.197479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.207305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.207326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.207651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.207667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.726 [2024-10-01 15:58:57.207675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.207887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.207899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.726 [2024-10-01 15:58:57.207906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.208112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.208127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.208269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.208280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.208287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.208297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.208304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.208311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.208455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.208465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.218771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.218792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.219161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.219178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.726 [2024-10-01 15:58:57.219185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.219340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.219350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.726 [2024-10-01 15:58:57.219356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.219637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.219652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.219691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.219698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.219707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.219717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.219723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.219729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.219858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.219874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.229487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.229509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.229799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.229815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.726 [2024-10-01 15:58:57.229823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.229992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.230004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.726 [2024-10-01 15:58:57.230011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.230155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.230168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.230305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.230315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.230322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.230331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.230337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.230343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.230370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.230377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.726 [2024-10-01 15:58:57.241068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.241089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.726 [2024-10-01 15:58:57.241333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.241346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.726 [2024-10-01 15:58:57.241353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.241502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.726 [2024-10-01 15:58:57.241512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.726 [2024-10-01 15:58:57.241522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.726 [2024-10-01 15:58:57.241533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.241543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.726 [2024-10-01 15:58:57.241560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.241567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.726 [2024-10-01 15:58:57.241573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.726 [2024-10-01 15:58:57.241582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.726 [2024-10-01 15:58:57.241588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.241594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.241607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.241614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.253169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.253192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.253368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.253380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.727 [2024-10-01 15:58:57.253388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.253517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.253527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.727 [2024-10-01 15:58:57.253534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.254410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.254425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.254958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.254972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.254978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.254988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.254994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.255001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.255194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.255204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.265500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.265530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.265859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.265882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.727 [2024-10-01 15:58:57.265890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.265985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.265994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.727 [2024-10-01 15:58:57.266002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.266146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.266158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.266296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.266306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.266312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.266321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.266327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.266333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.266363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.266370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.276593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.276615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.276833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.276847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.727 [2024-10-01 15:58:57.276854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.276951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.276962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.727 [2024-10-01 15:58:57.276968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.277099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.277111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.277249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.277259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.277266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.277279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.277285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.277291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.277321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.277329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.287724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.287746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.287907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.287920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.727 [2024-10-01 15:58:57.287927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.288064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.288074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.727 [2024-10-01 15:58:57.288081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.288093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.288102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.288112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.288118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.288124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.288133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.288139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.288145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.288158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.288165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.298536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.298558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.727 [2024-10-01 15:58:57.298688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.298701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.727 [2024-10-01 15:58:57.298708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.298844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.727 [2024-10-01 15:58:57.298854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.727 [2024-10-01 15:58:57.298860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.727 [2024-10-01 15:58:57.299000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.299011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.727 [2024-10-01 15:58:57.299357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.299368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.299375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.299384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.727 [2024-10-01 15:58:57.299391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.727 [2024-10-01 15:58:57.299396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.727 [2024-10-01 15:58:57.299552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.727 [2024-10-01 15:58:57.299562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.728 [2024-10-01 15:58:57.309618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.728 [2024-10-01 15:58:57.309639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.728 [2024-10-01 15:58:57.309974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.728 [2024-10-01 15:58:57.309992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.728 [2024-10-01 15:58:57.310000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.728 [2024-10-01 15:58:57.310218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.728 [2024-10-01 15:58:57.310229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.728 [2024-10-01 15:58:57.310236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.728 [2024-10-01 15:58:57.310489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.728 [2024-10-01 15:58:57.310502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.728 [2024-10-01 15:58:57.310661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.728 [2024-10-01 15:58:57.310672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.728 [2024-10-01 15:58:57.310678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.728 [2024-10-01 15:58:57.310688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.310694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.310700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.310842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.310852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.320843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.320872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.321192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.321209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.728 [2024-10-01 15:58:57.321216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.321352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.321362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.728 [2024-10-01 15:58:57.321368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.321511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.321523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.321661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.321671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.321678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.321687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.321693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.321699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.321728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.321736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.728 [2024-10-01 15:58:57.331897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.331919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.332262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.332278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.728 [2024-10-01 15:58:57.332285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.332444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.332454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.728 [2024-10-01 15:58:57.332461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.332721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.332735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.332771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.332778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.332785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.728 [2024-10-01 15:58:57.332795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.332804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.332810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.332946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.332956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.342923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.342944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.343257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.343273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.728 [2024-10-01 15:58:57.343280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.343433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.343443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.728 [2024-10-01 15:58:57.343449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.343601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.343613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.728 [2024-10-01 15:58:57.343751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.343762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.343768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.343778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.728 [2024-10-01 15:58:57.343784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.728 [2024-10-01 15:58:57.343790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.728 [2024-10-01 15:58:57.343937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.728 [2024-10-01 15:58:57.343947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.728 [2024-10-01 15:58:57.353903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.353924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.728 [2024-10-01 15:58:57.354234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.354251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.728 [2024-10-01 15:58:57.354258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.354492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.728 [2024-10-01 15:58:57.354503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.728 [2024-10-01 15:58:57.354510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.728 [2024-10-01 15:58:57.354666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.354684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.354823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.354833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.354839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.729 [2024-10-01 15:58:57.354849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.354855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.354861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.354897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.354904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.365404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.365426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.365631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.365645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.729 [2024-10-01 15:58:57.365652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.365784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.365794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.729 [2024-10-01 15:58:57.365801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.366005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.366020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.366113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.366121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.366128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.366136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.366142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.366148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.366169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.366176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.729 [2024-10-01 15:58:57.376570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.376592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.376721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.376733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.729 [2024-10-01 15:58:57.376744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.376916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.376927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.729 [2024-10-01 15:58:57.376933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.377127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.377140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.377234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.377242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.377249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.729 [2024-10-01 15:58:57.377258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.377264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.377270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.377289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.377297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.387282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.387305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.387480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.387493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.729 [2024-10-01 15:58:57.387501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.387586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.387595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.729 [2024-10-01 15:58:57.387602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.387733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.387745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.387890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.387899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.387906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.387915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.387921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.387932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.387962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.387970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.729 [2024-10-01 15:58:57.397837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.397859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.398064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.398078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.729 [2024-10-01 15:58:57.398086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.398162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.398172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.729 [2024-10-01 15:58:57.398179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.398309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.398321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.398348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.398355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.398362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.729 [2024-10-01 15:58:57.398370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.398377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.398383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.729 [2024-10-01 15:58:57.398511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.398520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.729 [2024-10-01 15:58:57.409808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.409829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.729 [2024-10-01 15:58:57.410005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.410019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.729 [2024-10-01 15:58:57.410027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.410146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.729 [2024-10-01 15:58:57.410156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.729 [2024-10-01 15:58:57.410163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.729 [2024-10-01 15:58:57.410174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.410183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.729 [2024-10-01 15:58:57.410197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.729 [2024-10-01 15:58:57.410203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.729 [2024-10-01 15:58:57.410210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.410218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.410224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.410230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.410244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.410251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.730 [2024-10-01 15:58:57.421084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.421108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.421339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.421353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.730 [2024-10-01 15:58:57.421361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.421514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.421525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.730 [2024-10-01 15:58:57.421532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.421545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.421554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.421565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.421572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.421578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.730 [2024-10-01 15:58:57.421588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.421594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.421600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.421614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.421621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.432652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.432676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.433010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.433027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.730 [2024-10-01 15:58:57.433035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.433114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.433124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.730 [2024-10-01 15:58:57.433131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.433274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.433287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.433632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.433644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.433651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.433660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.433667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.433673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.433828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.433838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.730 [2024-10-01 15:58:57.443655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.443677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.443835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.443848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.730 [2024-10-01 15:58:57.443856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.443967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.443977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.730 [2024-10-01 15:58:57.443984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.443998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.444010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.444021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.444028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.444034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.730 [2024-10-01 15:58:57.444043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.444051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.444057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.444074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.444081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.454587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.454609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.454820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.454833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.730 [2024-10-01 15:58:57.454840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.455041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.455053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.730 [2024-10-01 15:58:57.455060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.455073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.455082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.455092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.455098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.455105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.455114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.455120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.455127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.730 [2024-10-01 15:58:57.455140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.730 [2024-10-01 15:58:57.455147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.730 [2024-10-01 15:58:57.466580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.466602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.730 [2024-10-01 15:58:57.466831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.466846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.730 [2024-10-01 15:58:57.466854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.466943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.730 [2024-10-01 15:58:57.466953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.730 [2024-10-01 15:58:57.466961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.730 [2024-10-01 15:58:57.467106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.467118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.730 [2024-10-01 15:58:57.467144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.467155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.730 [2024-10-01 15:58:57.467162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.730 [2024-10-01 15:58:57.467171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.730 [2024-10-01 15:58:57.467177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.467183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.467197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.467203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.477401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.477422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.477524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.477536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.731 [2024-10-01 15:58:57.477544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.477637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.477647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.731 [2024-10-01 15:58:57.477654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.477666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.477676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.477686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.477693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.477700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.477708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.477714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.477720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.477734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.477741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.731 [2024-10-01 15:58:57.489600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.489623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.489739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.489752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.731 [2024-10-01 15:58:57.489759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.489917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.489927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.731 [2024-10-01 15:58:57.489934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.489945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.489954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.489965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.489971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.489977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.731 [2024-10-01 15:58:57.489986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.489991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.489997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.490011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.490018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.499682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.499712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.499806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.499818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.731 [2024-10-01 15:58:57.499826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.499911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.499921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.731 [2024-10-01 15:58:57.499928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.499936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.499947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.499955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.499961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.499967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.499980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.499986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.499991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.499998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.500010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.731 [2024-10-01 15:58:57.510029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.510050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.510266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.510279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.731 [2024-10-01 15:58:57.510287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.510419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.510429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.731 [2024-10-01 15:58:57.510435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.510447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.510456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.510467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.510473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.510479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.731 [2024-10-01 15:58:57.510487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.510493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.510499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.510513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.510519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.520108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.520138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.731 [2024-10-01 15:58:57.520353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.520365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.731 [2024-10-01 15:58:57.520372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.520526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.731 [2024-10-01 15:58:57.520536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.731 [2024-10-01 15:58:57.520543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.731 [2024-10-01 15:58:57.520551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.520563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.731 [2024-10-01 15:58:57.520570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.520576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.520585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.520598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.731 [2024-10-01 15:58:57.520605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.731 [2024-10-01 15:58:57.520611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.731 [2024-10-01 15:58:57.520617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.731 [2024-10-01 15:58:57.520628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.731 [2024-10-01 15:58:57.530640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.530661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.530783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.530795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.732 [2024-10-01 15:58:57.530803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.530953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.530964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.732 [2024-10-01 15:58:57.530970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.530982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.530991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.531001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.531007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.531013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.732 [2024-10-01 15:58:57.531022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.531028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.531034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.531423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.531434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.541868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.541888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.542005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.542018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.732 [2024-10-01 15:58:57.542025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.542119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.542128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.732 [2024-10-01 15:58:57.542141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.542153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.542162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.542172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.542178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.542184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.542192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.542198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.542204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.542217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.542224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.732 [2024-10-01 15:58:57.552684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.552706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.553627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.553646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.732 [2024-10-01 15:58:57.553654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.553744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.553754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.732 [2024-10-01 15:58:57.553761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.553833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.553843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.553854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.553860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.553872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.732 [2024-10-01 15:58:57.553882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.553888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.553894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.553908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.553915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.565876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.565902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.566259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.566275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.732 [2024-10-01 15:58:57.566282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.566432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.566442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.732 [2024-10-01 15:58:57.566449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.566798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.566813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.566978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.566989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.566996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.567005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.732 [2024-10-01 15:58:57.567011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.732 [2024-10-01 15:58:57.567018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.732 [2024-10-01 15:58:57.567161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.732 [2024-10-01 15:58:57.567170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.732 [2024-10-01 15:58:57.576904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.576926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.732 [2024-10-01 15:58:57.577173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.577190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.732 [2024-10-01 15:58:57.577198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.577336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.732 [2024-10-01 15:58:57.577346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.732 [2024-10-01 15:58:57.577353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.732 [2024-10-01 15:58:57.577496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.577508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.732 [2024-10-01 15:58:57.577645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.733 [2024-10-01 15:58:57.577655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.733 [2024-10-01 15:58:57.577662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.733 [2024-10-01 15:58:57.577675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.577681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.577687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.577716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.577723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.587886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.587907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.588048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.588060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.733 [2024-10-01 15:58:57.588068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.588150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.588159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.733 [2024-10-01 15:58:57.588167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.588178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.588187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.588197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.588203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.588210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.588219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.588224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.588230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.588244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.588250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.600263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.600284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.600475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.600487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.733 [2024-10-01 15:58:57.600495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.600638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.600647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.733 [2024-10-01 15:58:57.600655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.600669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.600678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.600688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.600694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.600700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.600708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.600715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.600721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.600734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.600740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.613425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.613447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.613610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.613622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.733 [2024-10-01 15:58:57.613629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.613845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.613854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.733 [2024-10-01 15:58:57.613861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.614313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.614326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.614528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.614539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.614545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.614555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.614562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.614568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.614713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.614723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.624738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.624758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.624936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.624949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.733 [2024-10-01 15:58:57.624957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.625121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.625131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.733 [2024-10-01 15:58:57.625137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.625388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.625401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.625638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.625649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.625656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.625665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.625672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.625678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.625827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.625836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.733 [2024-10-01 15:58:57.635649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.635669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.733 [2024-10-01 15:58:57.635837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.635849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.733 [2024-10-01 15:58:57.635857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.635998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.733 [2024-10-01 15:58:57.636008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.733 [2024-10-01 15:58:57.636015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.733 [2024-10-01 15:58:57.636027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.636036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.733 [2024-10-01 15:58:57.636045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.733 [2024-10-01 15:58:57.636052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.733 [2024-10-01 15:58:57.636058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.733 [2024-10-01 15:58:57.636065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.636075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.636081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.636095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.636101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.647888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.647910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.648164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.648180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.734 [2024-10-01 15:58:57.648187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.648387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.648398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.734 [2024-10-01 15:58:57.648405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.648548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.648561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.648710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.648721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.648727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.648737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.648743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.648750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.648779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.648786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.658611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.658632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.658793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.658805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.734 [2024-10-01 15:58:57.658812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.658983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.658993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.734 [2024-10-01 15:58:57.659000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.659011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.659024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.659033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.659039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.659045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.659053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.659059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.659065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.659079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.659085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.670796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.670817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.670940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.670953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.734 [2024-10-01 15:58:57.670961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.671106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.671116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.734 [2024-10-01 15:58:57.671122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.671134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.671143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.671153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.671159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.671166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.671174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.671180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.671186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.671200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.671206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.682174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.682195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.682297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.682313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.734 [2024-10-01 15:58:57.682320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.682453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.682463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.734 [2024-10-01 15:58:57.682470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.682482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.682491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.682501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.682508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.682514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.682523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.682528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.682534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.682548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.682555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.734 [2024-10-01 15:58:57.693258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.693279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.734 [2024-10-01 15:58:57.693484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.693496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.734 [2024-10-01 15:58:57.693503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.693676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.734 [2024-10-01 15:58:57.693686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.734 [2024-10-01 15:58:57.693693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.734 [2024-10-01 15:58:57.693704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.693713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.734 [2024-10-01 15:58:57.693723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.693729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.693735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.693743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.734 [2024-10-01 15:58:57.693749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.734 [2024-10-01 15:58:57.693759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.734 [2024-10-01 15:58:57.693773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.693779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.704646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.704666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.704836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.704849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.735 [2024-10-01 15:58:57.704857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.704964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.704975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.735 [2024-10-01 15:58:57.704981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.704993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.705002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.705012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.705018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.705024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.705033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.705038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.705044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.705057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.705064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.716835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.716855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.717030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.717042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.735 [2024-10-01 15:58:57.717050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.717141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.717151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.735 [2024-10-01 15:58:57.717157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.717169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.717178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.717191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.717197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.717203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.717212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.717217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.717223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.717237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.717244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.728467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.728488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.728732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.728748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.735 [2024-10-01 15:58:57.728755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.728860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.728876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.735 [2024-10-01 15:58:57.728883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.729035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.729048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.729074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.729081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.729088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.729097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.729103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.729109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.729122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.729129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.739229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.739250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.739541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.739556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.735 [2024-10-01 15:58:57.739567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.739712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.739722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.735 [2024-10-01 15:58:57.739729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.739880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.739893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.740030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.740041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.740047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.740056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.740062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.740068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.740097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.740104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.750776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.750797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.735 [2024-10-01 15:58:57.750909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.750922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.735 [2024-10-01 15:58:57.750929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.751103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.735 [2024-10-01 15:58:57.751113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.735 [2024-10-01 15:58:57.751120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.735 [2024-10-01 15:58:57.751131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.751140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.735 [2024-10-01 15:58:57.751149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.751156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.751162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.751170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.735 [2024-10-01 15:58:57.751176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.735 [2024-10-01 15:58:57.751182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.735 [2024-10-01 15:58:57.751199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.751206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.735 [2024-10-01 15:58:57.764053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.735 [2024-10-01 15:58:57.764074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.764311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.764332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.736 [2024-10-01 15:58:57.764340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.764421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.764431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.736 [2024-10-01 15:58:57.764438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.764612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.764625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.764765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.764776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.764782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.736 [2024-10-01 15:58:57.764792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.764798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.764804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.764834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.764841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.775203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.775225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.775465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.775480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.736 [2024-10-01 15:58:57.775487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.775579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.775588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.736 [2024-10-01 15:58:57.775595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.775725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.775737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.775881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.775895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.775902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.775910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.775916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.775922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.775953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.775960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.736 [2024-10-01 15:58:57.785936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.785958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.786303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.786319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.736 [2024-10-01 15:58:57.786327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.786462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.786472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.736 [2024-10-01 15:58:57.786479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.786622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.786634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.786783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.786794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.786801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.736 [2024-10-01 15:58:57.786811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.786817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.786823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.786852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.786860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.796459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.796481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.796598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.796611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.736 [2024-10-01 15:58:57.796618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.796702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.796712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.736 [2024-10-01 15:58:57.796719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.796848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.796861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.796896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.796903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.796910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.796919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.796925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.796931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.797059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.797068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.736 [2024-10-01 15:58:57.808714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.808735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.808943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.808957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.736 [2024-10-01 15:58:57.808964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.809186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.809196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.736 [2024-10-01 15:58:57.809203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.736 [2024-10-01 15:58:57.809215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.809224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.736 [2024-10-01 15:58:57.809234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.809241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.809247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.736 [2024-10-01 15:58:57.809256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.736 [2024-10-01 15:58:57.809261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.736 [2024-10-01 15:58:57.809268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.736 [2024-10-01 15:58:57.809281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.809288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.736 [2024-10-01 15:58:57.819502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.819524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.736 [2024-10-01 15:58:57.819686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.736 [2024-10-01 15:58:57.819699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.737 [2024-10-01 15:58:57.819706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.819842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.819852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.737 [2024-10-01 15:58:57.819859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.819876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.819886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.819896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.819902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.819909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.819917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.819923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.819929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.819943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.819949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.737 [2024-10-01 15:58:57.830287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.830308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.830545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.830567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.737 [2024-10-01 15:58:57.830574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.830716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.830726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.737 [2024-10-01 15:58:57.830732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.830744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.830753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.830763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.830770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.830782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.737 [2024-10-01 15:58:57.830791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.830797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.830803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.830817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.830823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.841695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.841717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.842084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.842101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.737 [2024-10-01 15:58:57.842109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.842324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.842334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.737 [2024-10-01 15:58:57.842341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.842484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.842497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.842523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.842530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.842537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.842546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.842551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.842557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.842571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.842578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.737 [2024-10-01 15:58:57.852210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.852231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.852473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.852492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.737 [2024-10-01 15:58:57.852499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.852639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.852648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.737 [2024-10-01 15:58:57.852659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.852799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.852811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.852905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.852913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.852920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.737 [2024-10-01 15:58:57.852930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.852935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.852941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.853019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.853028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.737 [2024-10-01 15:58:57.862553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.862574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.737 [2024-10-01 15:58:57.862745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.862758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.737 [2024-10-01 15:58:57.862766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.862962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.737 [2024-10-01 15:58:57.862972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.737 [2024-10-01 15:58:57.862979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.737 [2024-10-01 15:58:57.863218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.863232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.737 [2024-10-01 15:58:57.863268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.863276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.737 [2024-10-01 15:58:57.863282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.737 [2024-10-01 15:58:57.863291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.737 [2024-10-01 15:58:57.863296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.863303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.863317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.863324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.738 [2024-10-01 15:58:57.874330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.874354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.874557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.874570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.738 [2024-10-01 15:58:57.874577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.874767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.874784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.738 [2024-10-01 15:58:57.874791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.874988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.875003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.875096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.875105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.875111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.738 [2024-10-01 15:58:57.875120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.875127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.875133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.875152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.875160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.885000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.885022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.885595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.885613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.738 [2024-10-01 15:58:57.885621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.885756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.885766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.738 [2024-10-01 15:58:57.885773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.885936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.885949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.885977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.885985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.885991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.886003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.886009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.886015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.886028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.886035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.738 [2024-10-01 15:58:57.895955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.895976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.896210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.896225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.738 [2024-10-01 15:58:57.896233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.896411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.896422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.738 [2024-10-01 15:58:57.896430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.896573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.896585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.896611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.896618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.896625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.738 [2024-10-01 15:58:57.896633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.896639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.896646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.896773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.896782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.907117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.907138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.907480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.907496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.738 [2024-10-01 15:58:57.907504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.907643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.907653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.738 [2024-10-01 15:58:57.907660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.907807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.907819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.907962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.907973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.907980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.907988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.907994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.908000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.908030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.908038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.738 [2024-10-01 15:58:57.918124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.918145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.738 [2024-10-01 15:58:57.918385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.918404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.738 [2024-10-01 15:58:57.918411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.918568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.738 [2024-10-01 15:58:57.918578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.738 [2024-10-01 15:58:57.918584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.738 [2024-10-01 15:58:57.918596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.918605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.738 [2024-10-01 15:58:57.918615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.918622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.918628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.738 [2024-10-01 15:58:57.918636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.738 [2024-10-01 15:58:57.918642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.738 [2024-10-01 15:58:57.918648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.738 [2024-10-01 15:58:57.918662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.738 [2024-10-01 15:58:57.918669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.930596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.930617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.930774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.930787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.739 [2024-10-01 15:58:57.930794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.930985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.930996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.739 [2024-10-01 15:58:57.931003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.931390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.931404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.931563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.931573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.931580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.931589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.931595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.931601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.931744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.931754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.739 [2024-10-01 15:58:57.942528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.942549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.942687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.942699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.739 [2024-10-01 15:58:57.942707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.942921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.942931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.739 [2024-10-01 15:58:57.942938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.943392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.943406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.943582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.943593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.943599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.739 [2024-10-01 15:58:57.943608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.943624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.943630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.943773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.943782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.953506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.953526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.953738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.953751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.739 [2024-10-01 15:58:57.953758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.953904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.953914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.739 [2024-10-01 15:58:57.953921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.953932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.953941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.953951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.953957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.953964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.953972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.953978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.953984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.953998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.954004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.739 [2024-10-01 15:58:57.965900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.965922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.966267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.966283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.739 [2024-10-01 15:58:57.966291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.966429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.966439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.739 [2024-10-01 15:58:57.966446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.966593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.966609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.966757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.966768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.966774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.739 [2024-10-01 15:58:57.966784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.966790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.966796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.966826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.966834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.976893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.976914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.977129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.977141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.739 [2024-10-01 15:58:57.977149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.977293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.739 [2024-10-01 15:58:57.977303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.739 [2024-10-01 15:58:57.977309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.739 [2024-10-01 15:58:57.977321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.977330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.739 [2024-10-01 15:58:57.977340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.977346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.977353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.977361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.739 [2024-10-01 15:58:57.977367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.739 [2024-10-01 15:58:57.977373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.739 [2024-10-01 15:58:57.977387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.739 [2024-10-01 15:58:57.977393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.739 11337.50 IOPS, 44.29 MiB/s [2024-10-01 15:58:57.987691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.987713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.739 [2024-10-01 15:58:57.988064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:57.988084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.740 [2024-10-01 15:58:57.988092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:57.988309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:57.988320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.740 [2024-10-01 15:58:57.988327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:57.988526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:57.988540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:57.988684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:57.988694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:57.988702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.740 [2024-10-01 15:58:57.988712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:57.988718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:57.988724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:57.988753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:57.988761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:57.998737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:57.998758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:57.998976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:57.998989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.740 [2024-10-01 15:58:57.998996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:57.999141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:57.999151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.740 [2024-10-01 15:58:57.999157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:57.999169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:57.999178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:57.999187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:57.999194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:57.999200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:57.999209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:57.999215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:57.999224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:57.999238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:57.999244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.740 [2024-10-01 15:58:58.010729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:58.010750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:58.011130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:58.011147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.740 [2024-10-01 15:58:58.011154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:58.011383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:58.011394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.740 [2024-10-01 15:58:58.011401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:58.011429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:58.011439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:58.011458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:58.011465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:58.011472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.740 [2024-10-01 15:58:58.011480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:58.011486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:58.011492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:58.011506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:58.011512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:58.021049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:58.021070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.740 [2024-10-01 15:58:58.021282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:58.021295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.740 [2024-10-01 15:58:58.021302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:58.021439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.740 [2024-10-01 15:58:58.021449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.740 [2024-10-01 15:58:58.021455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.740 [2024-10-01 15:58:58.021466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:58.021479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.740 [2024-10-01 15:58:58.021489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:58.021495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:58.021501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:58.021510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.740 [2024-10-01 15:58:58.021515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.740 [2024-10-01 15:58:58.021521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.740 [2024-10-01 15:58:58.022493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.740 [2024-10-01 15:58:58.022507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.740 [2024-10-01 15:58:58.032946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.740 [2024-10-01 15:58:58.032968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.740 [2024-10-01 15:58:58.033358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.740 [2024-10-01 15:58:58.033374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.740 [2024-10-01 15:58:58.033382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.740 [2024-10-01 15:58:58.033582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.740 [2024-10-01 15:58:58.033592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.740 [2024-10-01 15:58:58.033599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.740 [2024-10-01 15:58:58.033849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.740 [2024-10-01 15:58:58.033867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.740 [2024-10-01 15:58:58.034017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.740 [2024-10-01 15:58:58.034027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.740 [2024-10-01 15:58:58.034033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.740 [2024-10-01 15:58:58.034042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.740 [2024-10-01 15:58:58.034048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.740 [2024-10-01 15:58:58.034054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.740 [2024-10-01 15:58:58.034084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.740 [2024-10-01 15:58:58.034091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.740 [2024-10-01 15:58:58.044844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.740 [2024-10-01 15:58:58.044869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.740 [2024-10-01 15:58:58.045081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.740 [2024-10-01 15:58:58.045093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.740 [2024-10-01 15:58:58.045104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.740 [2024-10-01 15:58:58.045189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.045198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.741 [2024-10-01 15:58:58.045205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.045217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.045226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.045237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.045243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.045249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.045257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.045263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.045269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.045283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.045289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.057192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.057213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.057451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.057463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.741 [2024-10-01 15:58:58.057471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.057613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.057623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.741 [2024-10-01 15:58:58.057630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.057642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.057651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.057661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.057667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.057673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.057682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.057688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.057694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.057711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.057717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.069783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.069804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.069972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.069986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.741 [2024-10-01 15:58:58.069993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.070185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.070194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.741 [2024-10-01 15:58:58.070201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.070212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.070221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.070231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.070237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.070243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.070252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.070257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.070263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.070276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.070283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.082266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.082287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.082529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.082541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.741 [2024-10-01 15:58:58.082548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.082766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.082775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.741 [2024-10-01 15:58:58.082782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.082793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.082802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.082817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.082823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.082830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.082838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.082844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.082850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.082868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.082875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.092345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.092375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.741 [2024-10-01 15:58:58.092631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.092644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.741 [2024-10-01 15:58:58.092652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.092847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.741 [2024-10-01 15:58:58.092858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.741 [2024-10-01 15:58:58.092870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.741 [2024-10-01 15:58:58.092879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.094568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.741 [2024-10-01 15:58:58.094587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.094593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.094600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.741 [2024-10-01 15:58:58.095765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.741 [2024-10-01 15:58:58.095782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.741 [2024-10-01 15:58:58.095788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.741 [2024-10-01 15:58:58.095794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.096180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.103402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.103422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.103724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.103739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.742 [2024-10-01 15:58:58.103746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.103827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.103837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.742 [2024-10-01 15:58:58.103843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.105558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.105577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.106486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.106498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.106505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.106515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.106521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.106527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.107048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.107061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.116398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.116419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.116656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.116668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.742 [2024-10-01 15:58:58.116676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.116839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.116848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.742 [2024-10-01 15:58:58.116855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.116872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.116881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.116891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.116897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.116903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.116911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.116917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.116923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.116936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.116949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.128827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.128848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.129075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.129088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.742 [2024-10-01 15:58:58.129095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.129311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.129322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.742 [2024-10-01 15:58:58.129328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.129783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.129796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.129969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.129980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.129987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.129996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.130002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.130008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.130150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.130160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.139614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.139634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.139873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.139886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.742 [2024-10-01 15:58:58.139893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.140037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.140046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.742 [2024-10-01 15:58:58.140053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.140064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.140074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.140083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.140093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.140100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.140108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.140114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.140120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.140133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.140140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.152627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.152650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.742 [2024-10-01 15:58:58.152935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.152949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.742 [2024-10-01 15:58:58.152956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.153100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.742 [2024-10-01 15:58:58.153110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.742 [2024-10-01 15:58:58.153116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.742 [2024-10-01 15:58:58.153308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.153321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.742 [2024-10-01 15:58:58.153415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.153423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.153429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.153438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.742 [2024-10-01 15:58:58.153444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.742 [2024-10-01 15:58:58.153450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.742 [2024-10-01 15:58:58.153619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.153629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.742 [2024-10-01 15:58:58.164121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.743 [2024-10-01 15:58:58.164142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.743 [2024-10-01 15:58:58.164808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.743 [2024-10-01 15:58:58.164826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.743 [2024-10-01 15:58:58.164833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.743 [2024-10-01 15:58:58.164981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.743 [2024-10-01 15:58:58.164995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.743 [2024-10-01 15:58:58.165002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.743 [2024-10-01 15:58:58.165285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.743 [2024-10-01 15:58:58.165299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.743 [2024-10-01 15:58:58.165335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.743 [2024-10-01 15:58:58.165342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.743 [2024-10-01 15:58:58.165349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.743 [2024-10-01 15:58:58.165358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.743 [2024-10-01 15:58:58.165364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.743 [2024-10-01 15:58:58.165371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.743 [2024-10-01 15:58:58.165384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.743 [2024-10-01 15:58:58.165391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.743 [2024-10-01 15:58:58.174352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.743 [2024-10-01 15:58:58.174373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.743 [2024-10-01 15:58:58.174609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.743 [2024-10-01 15:58:58.174622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.743 [2024-10-01 15:58:58.174629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.743 [2024-10-01 15:58:58.174822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.743 [2024-10-01 15:58:58.174833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.743 [2024-10-01 15:58:58.174840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.743 [2024-10-01 15:58:58.174851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.743 [2024-10-01 15:58:58.174861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.743 [2024-10-01 15:58:58.174876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.743 [2024-10-01 15:58:58.174882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.743 [2024-10-01 15:58:58.174889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.743 [2024-10-01 15:58:58.174897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.174903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.174909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.174923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.174930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.184932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.184953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.185143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.185156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.743 [2024-10-01 15:58:58.185164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.185378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.185388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.743 [2024-10-01 15:58:58.185394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.185406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.185415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.185425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.185432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.185439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.185448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.185453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.185459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.185473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.185480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.743 [2024-10-01 15:58:58.195884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.195904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.196074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.196086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.743 [2024-10-01 15:58:58.196094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.196308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.196319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.743 [2024-10-01 15:58:58.196325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.196337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.196346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.196356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.196362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.196372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.743 [2024-10-01 15:58:58.196380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.196386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.196392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.196406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.196412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.206194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.206215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.206447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.206460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.743 [2024-10-01 15:58:58.206468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.206602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.206612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.743 [2024-10-01 15:58:58.206618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.743 [2024-10-01 15:58:58.206630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.206639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.743 [2024-10-01 15:58:58.206649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.206655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.206662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.206671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.743 [2024-10-01 15:58:58.206677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.743 [2024-10-01 15:58:58.206682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.743 [2024-10-01 15:58:58.206696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.743 [2024-10-01 15:58:58.206703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.743 [2024-10-01 15:58:58.217934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.217955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.743 [2024-10-01 15:58:58.218261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.743 [2024-10-01 15:58:58.218276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.218284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.218498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.218508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.744 [2024-10-01 15:58:58.218519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.219208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.219226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.219540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.219551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.219558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.744 [2024-10-01 15:58:58.219567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.219573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.219580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.219622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.219630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.228015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.228044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.228275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.228295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.744 [2024-10-01 15:58:58.228303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.228522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.228533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.228540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.228548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.228674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.228685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.228691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.228697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.228851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.228861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.228873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.228880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.228905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.744 [2024-10-01 15:58:58.238452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.238475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.238651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.238664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.744 [2024-10-01 15:58:58.238671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.238810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.238820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.238827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.239325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.239341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.239715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.239726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.239732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.744 [2024-10-01 15:58:58.239742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.239748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.239754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.239914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.239925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.249345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.249365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.249553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.249565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.249572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.249735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.249744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.744 [2024-10-01 15:58:58.249751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.249763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.249771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.249781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.249787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.249794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.249802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.249811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.249817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.249830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.249837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.744 [2024-10-01 15:58:58.261531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.261553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.261844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.261861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.744 [2024-10-01 15:58:58.261874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.262067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.262078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.262084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.262289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.262303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.744 [2024-10-01 15:58:58.262446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.262457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.262464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.744 [2024-10-01 15:58:58.262474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.744 [2024-10-01 15:58:58.262480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.744 [2024-10-01 15:58:58.262486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.744 [2024-10-01 15:58:58.262517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.262525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.744 [2024-10-01 15:58:58.273697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.273719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.744 [2024-10-01 15:58:58.274064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.274081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.744 [2024-10-01 15:58:58.274089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.744 [2024-10-01 15:58:58.274282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.744 [2024-10-01 15:58:58.274292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.745 [2024-10-01 15:58:58.274299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.745 [2024-10-01 15:58:58.274451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.274464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.274603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.274613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.274619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.745 [2024-10-01 15:58:58.274628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.274635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.274641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.745 [2024-10-01 15:58:58.274670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.745 [2024-10-01 15:58:58.274679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.745 [2024-10-01 15:58:58.284490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.745 [2024-10-01 15:58:58.284512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.745 [2024-10-01 15:58:58.284726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.745 [2024-10-01 15:58:58.284738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.745 [2024-10-01 15:58:58.284745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.745 [2024-10-01 15:58:58.285002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.745 [2024-10-01 15:58:58.285013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.745 [2024-10-01 15:58:58.285020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.745 [2024-10-01 15:58:58.285032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.285041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.285051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.285057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.285063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.745 [2024-10-01 15:58:58.285072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.285077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.285084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.745 [2024-10-01 15:58:58.285097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.745 [2024-10-01 15:58:58.285104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.745 [2024-10-01 15:58:58.296014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.745 [2024-10-01 15:58:58.296036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.745 [2024-10-01 15:58:58.296197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.745 [2024-10-01 15:58:58.296213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.745 [2024-10-01 15:58:58.296221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.745 [2024-10-01 15:58:58.296360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.745 [2024-10-01 15:58:58.296370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.745 [2024-10-01 15:58:58.296377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.745 [2024-10-01 15:58:58.296388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.296397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.745 [2024-10-01 15:58:58.296407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.296413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.296419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.745 [2024-10-01 15:58:58.296427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.745 [2024-10-01 15:58:58.296433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.745 [2024-10-01 15:58:58.296439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.745 [2024-10-01 15:58:58.296452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.745 [2024-10-01 15:58:58.296459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.745 [2024-10-01 15:58:58.308047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.308069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.308262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.745 [2024-10-01 15:58:58.308275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.745 [2024-10-01 15:58:58.308282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.745 [2024-10-01 15:58:58.308445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.745 [2024-10-01 15:58:58.308454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.745 [2024-10-01 15:58:58.308461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.745 [2024-10-01 15:58:58.308472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.745 [2024-10-01 15:58:58.308481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.745 [2024-10-01 15:58:58.308491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.745 [2024-10-01 15:58:58.308498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.745 [2024-10-01 15:58:58.308504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.745 [2024-10-01 15:58:58.308513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.745 [2024-10-01 15:58:58.308518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.745 [2024-10-01 15:58:58.308528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.745 [2024-10-01 15:58:58.308541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.745 [2024-10-01 15:58:58.308548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.745 [2024-10-01 15:58:58.319010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.319031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.319396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.745 [2024-10-01 15:58:58.319412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.745 [2024-10-01 15:58:58.319419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.745 [2024-10-01 15:58:58.319588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.745 [2024-10-01 15:58:58.319599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.745 [2024-10-01 15:58:58.319606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.745 [2024-10-01 15:58:58.319749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.745 [2024-10-01 15:58:58.319762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.745 [2024-10-01 15:58:58.319787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.745 [2024-10-01 15:58:58.319795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.745 [2024-10-01 15:58:58.319802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.745 [2024-10-01 15:58:58.319811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.745 [2024-10-01 15:58:58.319816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.745 [2024-10-01 15:58:58.319822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.745 [2024-10-01 15:58:58.319836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.745 [2024-10-01 15:58:58.319843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.745 [2024-10-01 15:58:58.330519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.330541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.745 [2024-10-01 15:58:58.331135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.331153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.746 [2024-10-01 15:58:58.331161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.331386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.331396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.746 [2024-10-01 15:58:58.331404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.331560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.331580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.331624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.331631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.331638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.331647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.331653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.331659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.331672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.331679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.341695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.341716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.342054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.342071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.746 [2024-10-01 15:58:58.342079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.342221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.342231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.746 [2024-10-01 15:58:58.342238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.342385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.342397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.342535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.342545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.342552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.342560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.342566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.342572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.342602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.342610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.352659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.352680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.352937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.352957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.746 [2024-10-01 15:58:58.352968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.353177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.353188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.746 [2024-10-01 15:58:58.353195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.353326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.353338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.353364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.353372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.353378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.353387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.353393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.353399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.353413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.353420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.363037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.363058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.363338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.363354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.746 [2024-10-01 15:58:58.363361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.363451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.363461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.746 [2024-10-01 15:58:58.363468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.363672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.363685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.363715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.363722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.363729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.363739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.363744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.363751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.363891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.363901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.374315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.374335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.374585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.374597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.746 [2024-10-01 15:58:58.374604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.374799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.746 [2024-10-01 15:58:58.374810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.746 [2024-10-01 15:58:58.374817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.746 [2024-10-01 15:58:58.375267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.375281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.746 [2024-10-01 15:58:58.375479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.375489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.375496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.375505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.746 [2024-10-01 15:58:58.375511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.746 [2024-10-01 15:58:58.375517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.746 [2024-10-01 15:58:58.375662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.375671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.746 [2024-10-01 15:58:58.385806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.746 [2024-10-01 15:58:58.385827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.386175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.386191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.747 [2024-10-01 15:58:58.386199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.386394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.386405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.386411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.386693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.386707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.386876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.386888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.386894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.386903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.386909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.386916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.387059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.387069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.396852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.396878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.397088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.397101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.397108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.397245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.397255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.747 [2024-10-01 15:58:58.397262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.397273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.397282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.397292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.397299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.397305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.397314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.397319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.397325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.397339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.397346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.409632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.409653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.409805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.409817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.747 [2024-10-01 15:58:58.409825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.410048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.410059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.410065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.410077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.410086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.410096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.410102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.410109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.410118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.410123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.410129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.410142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.410150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.420383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.420404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.420566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.420579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.420586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.420782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.420792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.747 [2024-10-01 15:58:58.420798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.420810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.420819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.420829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.420835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.420841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.420849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.420855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.420861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.420882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.420892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.432169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.432191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.432469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.432484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.747 [2024-10-01 15:58:58.432491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.432732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.432743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.432750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.747 [2024-10-01 15:58:58.433667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.433683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.747 [2024-10-01 15:58:58.434175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.434187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.434193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.434203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.747 [2024-10-01 15:58:58.434209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.747 [2024-10-01 15:58:58.434215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.747 [2024-10-01 15:58:58.434378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.434388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.747 [2024-10-01 15:58:58.443814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.443834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.747 [2024-10-01 15:58:58.444818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.747 [2024-10-01 15:58:58.444835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.747 [2024-10-01 15:58:58.444843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.445003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.445013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.748 [2024-10-01 15:58:58.445020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.445488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.445503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.445694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.445708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.445715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.445725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.445731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.445737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.445770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.445778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.455226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.455248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.455654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.455670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.748 [2024-10-01 15:58:58.455678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.455824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.455834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.748 [2024-10-01 15:58:58.455840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.456099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.456113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.456273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.456283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.456290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.456299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.456306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.456312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.456342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.456349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.467087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.467108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.467432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.467447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.748 [2024-10-01 15:58:58.467455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.467579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.467592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.748 [2024-10-01 15:58:58.467599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.468273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.468290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.748 [2024-10-01 15:58:58.468603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.468614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.468620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.468630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.748 [2024-10-01 15:58:58.468637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.748 [2024-10-01 15:58:58.468643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.748 [2024-10-01 15:58:58.468686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.468694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.748 [2024-10-01 15:58:58.477168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.477198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.748 [2024-10-01 15:58:58.477479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.477493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.748 [2024-10-01 15:58:58.477500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.477847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.748 [2024-10-01 15:58:58.477866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.748 [2024-10-01 15:58:58.477874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.748 [2024-10-01 15:58:58.477883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.748 [2024-10-01 15:58:58.477912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.748 [2024-10-01 15:58:58.477920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.748 [2024-10-01 15:58:58.477926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.748 [2024-10-01 15:58:58.477932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.748 [2024-10-01 15:58:58.477945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.748 [2024-10-01 15:58:58.477952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.748 [2024-10-01 15:58:58.477958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.748 [2024-10-01 15:58:58.477963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.748 [2024-10-01 15:58:58.477975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.748 [2024-10-01 15:58:58.487417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.748 [2024-10-01 15:58:58.487437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.748 [2024-10-01 15:58:58.487599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-10-01 15:58:58.487611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.748 [2024-10-01 15:58:58.487619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.748 [2024-10-01 15:58:58.487832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-10-01 15:58:58.487842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.748 [2024-10-01 15:58:58.487848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.748 [2024-10-01 15:58:58.487860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.748 [2024-10-01 15:58:58.487874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.748 [2024-10-01 15:58:58.487884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.748 [2024-10-01 15:58:58.487890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.748 [2024-10-01 15:58:58.487897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.748 [2024-10-01 15:58:58.487905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.748 [2024-10-01 15:58:58.487910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.748 [2024-10-01 15:58:58.487917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.748 [2024-10-01 15:58:58.487930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.748 [2024-10-01 15:58:58.487937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.748 [2024-10-01 15:58:58.498848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.748 [2024-10-01 15:58:58.498873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.748 [2024-10-01 15:58:58.499040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-10-01 15:58:58.499052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.748 [2024-10-01 15:58:58.499060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.748 [2024-10-01 15:58:58.499228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.748 [2024-10-01 15:58:58.499239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.748 [2024-10-01 15:58:58.499246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.499257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.499266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.499276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.499282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.499292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.499301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.499307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.499313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.499326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.499332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.749 [2024-10-01 15:58:58.511541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.511562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.511972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.511989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.749 [2024-10-01 15:58:58.511997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.512211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.512221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.749 [2024-10-01 15:58:58.512228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.512482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.512495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.512542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.512551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.512557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.749 [2024-10-01 15:58:58.512567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.512572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.512579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.512593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.512599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.522806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.522827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.523127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.523144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.749 [2024-10-01 15:58:58.523151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.523366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.523376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.749 [2024-10-01 15:58:58.523386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.523530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.523542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.523680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.523690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.523696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.523706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.523713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.523719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.523749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.523756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.749 [2024-10-01 15:58:58.533712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.533736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.533929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.533944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.749 [2024-10-01 15:58:58.533952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.534056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.534068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.749 [2024-10-01 15:58:58.534075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.534206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.534219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.534357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.534367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.534374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.749 [2024-10-01 15:58:58.534383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.534390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.534396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.534425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.534433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.544874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.544901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.545298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.545315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.749 [2024-10-01 15:58:58.545323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.545551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.545562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.749 [2024-10-01 15:58:58.545569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.545600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.545610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.545620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.545626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.545633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.545642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.749 [2024-10-01 15:58:58.545648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.749 [2024-10-01 15:58:58.545655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.749 [2024-10-01 15:58:58.545668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.749 [2024-10-01 15:58:58.545675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.749 [2024-10-01 15:58:58.555666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.555687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.749 [2024-10-01 15:58:58.555872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.555886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.749 [2024-10-01 15:58:58.555894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.556081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.749 [2024-10-01 15:58:58.556091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.749 [2024-10-01 15:58:58.556098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.749 [2024-10-01 15:58:58.556492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.556505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.749 [2024-10-01 15:58:58.556734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.556744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.556751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.750 [2024-10-01 15:58:58.556764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.556770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.556776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.556933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.556944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.567946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.567969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.568376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.568394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.750 [2024-10-01 15:58:58.568403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.568614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.568625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.750 [2024-10-01 15:58:58.568633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.568665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.568675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.568694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.568702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.568709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.568718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.568725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.568730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.568744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.568752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.750 [2024-10-01 15:58:58.578062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.578081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.578242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.578255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.750 [2024-10-01 15:58:58.578262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.578395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.578405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.750 [2024-10-01 15:58:58.578412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.579132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.579147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.579614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.579625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.579632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.750 [2024-10-01 15:58:58.579641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.579647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.579654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.579822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.579831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.590146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.590167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.590517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.590534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.750 [2024-10-01 15:58:58.590541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.590735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.590746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.750 [2024-10-01 15:58:58.590753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.591050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.591065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.591215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.591225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.591232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.591242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.591248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.591254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.591285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.591292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.750 [2024-10-01 15:58:58.601649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.601670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.602090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.602107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.750 [2024-10-01 15:58:58.602115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.602260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.602270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.750 [2024-10-01 15:58:58.602277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.750 [2024-10-01 15:58:58.602530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.602543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.750 [2024-10-01 15:58:58.602691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.602701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.602708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.750 [2024-10-01 15:58:58.602717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.750 [2024-10-01 15:58:58.602724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.750 [2024-10-01 15:58:58.602730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.750 [2024-10-01 15:58:58.602760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.602768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.750 [2024-10-01 15:58:58.613079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.613101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.750 [2024-10-01 15:58:58.613479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.750 [2024-10-01 15:58:58.613495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.751 [2024-10-01 15:58:58.613503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.613640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.613650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.613656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.613840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.613855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.614001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.614012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.614018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.614028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.614034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.614044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.614074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.614082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.751 [2024-10-01 15:58:58.624615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.624637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.624971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.624989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.624997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.625213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.625224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.751 [2024-10-01 15:58:58.625231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.625485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.625499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.625535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.625543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.625549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.751 [2024-10-01 15:58:58.625559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.625565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.625571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.625700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.625709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.636130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.636151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.636508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.636524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.751 [2024-10-01 15:58:58.636531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.636744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.636755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.636762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.636951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.636970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.637112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.637123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.637129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.637139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.637145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.637151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.637293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.637302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.751 [2024-10-01 15:58:58.647675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.647696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.648081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.648098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.648105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.648271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.648281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.751 [2024-10-01 15:58:58.648288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.648570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.648583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.648734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.648744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.648750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.751 [2024-10-01 15:58:58.648760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.648766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.648772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.648803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.648810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.659194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.659215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.659563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.659579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.751 [2024-10-01 15:58:58.659591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.659809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.659819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.659826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.660083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.660098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.751 [2024-10-01 15:58:58.660134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.660142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.660148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.660157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.751 [2024-10-01 15:58:58.660163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.751 [2024-10-01 15:58:58.660169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.751 [2024-10-01 15:58:58.660298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.751 [2024-10-01 15:58:58.660307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.751 [2024-10-01 15:58:58.670333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.670353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.751 [2024-10-01 15:58:58.670590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.670603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.751 [2024-10-01 15:58:58.670610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.751 [2024-10-01 15:58:58.670803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.751 [2024-10-01 15:58:58.670813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.670819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.670831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.670840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.670850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.670856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.670867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.752 [2024-10-01 15:58:58.670876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.670882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.670890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.670904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.670911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.683066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.683087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.683323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.683335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.683343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.683549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.683559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.752 [2024-10-01 15:58:58.683566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.683578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.683588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.683597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.683604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.683610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.683618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.683624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.683631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.683644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.683651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.752 [2024-10-01 15:58:58.693764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.693785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.693999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.694011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.752 [2024-10-01 15:58:58.694019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.694213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.694224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.694230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.694243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.694252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.694268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.694274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.694281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.752 [2024-10-01 15:58:58.694289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.694295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.694301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.694314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.694321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.706386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.706407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.706750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.706766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.706774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.706991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.707002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.752 [2024-10-01 15:58:58.707010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.707363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.707377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.707635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.707646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.707653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.707663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.707669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.707675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.707715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.707723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.752 [2024-10-01 15:58:58.718101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.718122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.718458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.718475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.752 [2024-10-01 15:58:58.718482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.718707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.718718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.718725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.718879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.718893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.719062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.719073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.719080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.752 [2024-10-01 15:58:58.719089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.719095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.719102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.719175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.719185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.752 [2024-10-01 15:58:58.729114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.729135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.752 [2024-10-01 15:58:58.729339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.729352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.752 [2024-10-01 15:58:58.729359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.729493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.752 [2024-10-01 15:58:58.729503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.752 [2024-10-01 15:58:58.729510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.752 [2024-10-01 15:58:58.729639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.729651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.752 [2024-10-01 15:58:58.729800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.752 [2024-10-01 15:58:58.729811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.752 [2024-10-01 15:58:58.729817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.752 [2024-10-01 15:58:58.729827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.753 [2024-10-01 15:58:58.729833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.753 [2024-10-01 15:58:58.729839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.753 [2024-10-01 15:58:58.729875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.753 [2024-10-01 15:58:58.729886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.753 [2024-10-01 15:58:58.739397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.739418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.739663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.739677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.753 [2024-10-01 15:58:58.739684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.739901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.739912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.739919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.739931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.739940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.739950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.739956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.739963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.739971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.739977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.739983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.739997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.740003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.752101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.752122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.752281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.752293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.752300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.752516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.752526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.753 [2024-10-01 15:58:58.752532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.752544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.752553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.752563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.752572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.752578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.752587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.752592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.752598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.752612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.752619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.764057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.764079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.764475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.764492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.753 [2024-10-01 15:58:58.764500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.764667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.764678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.764685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.764833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.764846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.764875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.764883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.764890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.764899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.764905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.764911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.764926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.764932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.775522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.775544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.775840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.775855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.775868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.775955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.775968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.753 [2024-10-01 15:58:58.775975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.776119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.776131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.776157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.776164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.776170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.776179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.776185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.776191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.776214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.776221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.786473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.786493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.786707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.786720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.753 [2024-10-01 15:58:58.786727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.786987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.786998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.787005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.787017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.787026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.753 [2024-10-01 15:58:58.787041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.787048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.787054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.787063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.753 [2024-10-01 15:58:58.787069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.753 [2024-10-01 15:58:58.787075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.753 [2024-10-01 15:58:58.787089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.787095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.753 [2024-10-01 15:58:58.796898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.796919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.753 [2024-10-01 15:58:58.797128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.797140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.753 [2024-10-01 15:58:58.797148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.753 [2024-10-01 15:58:58.797337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.753 [2024-10-01 15:58:58.797347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.797354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.797365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.797374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.797385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.797391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.797397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.797405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.797411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.797417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.797430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.797436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.809826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.809849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.810538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.810557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.810565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.810713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.810722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.754 [2024-10-01 15:58:58.810729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.811037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.811052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.811203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.811213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.811223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.811233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.811239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.811245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.811275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.811283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.821236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.821260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.821658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.821675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.754 [2024-10-01 15:58:58.821683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.821878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.821890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.821897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.822046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.822059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.822086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.822093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.822100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.822109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.822116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.822122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.822146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.822153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.832051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.832074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.832194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.832207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.832214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.832436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.832447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.754 [2024-10-01 15:58:58.832458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.832471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.832481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.832491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.832497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.832503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.832512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.832517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.832523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.832537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.832544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.843051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.843074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.843250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.843265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.754 [2024-10-01 15:58:58.843273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.843466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.843476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.843483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.843644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.843658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.754 [2024-10-01 15:58:58.843809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.843821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.843827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.843837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.754 [2024-10-01 15:58:58.843844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.754 [2024-10-01 15:58:58.843850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.754 [2024-10-01 15:58:58.843887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.843895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.754 [2024-10-01 15:58:58.853134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.854122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.754 [2024-10-01 15:58:58.854293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.754 [2024-10-01 15:58:58.854308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.754 [2024-10-01 15:58:58.854315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.754 [2024-10-01 15:58:58.854886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.854903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.755 [2024-10-01 15:58:58.854911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.854920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.855221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.855234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.855240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.855247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.855289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.855297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.855303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.855309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.855322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.864723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.864899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.865084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.865099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.755 [2024-10-01 15:58:58.865107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.865425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.865440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.755 [2024-10-01 15:58:58.865448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.865457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.865601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.865611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.865618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.865624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.865769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.865779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.865785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.865791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.865820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.876132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.876153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.876604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.876620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.755 [2024-10-01 15:58:58.876628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.876813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.876824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.755 [2024-10-01 15:58:58.876831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.876984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.876997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.877024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.877031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.877039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.877047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.877053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.877059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.877240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.877250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.887562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.887584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.755 [2024-10-01 15:58:58.888097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.888114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.755 [2024-10-01 15:58:58.888122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.888222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.755 [2024-10-01 15:58:58.888232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.755 [2024-10-01 15:58:58.888239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.755 [2024-10-01 15:58:58.888405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.888417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.755 [2024-10-01 15:58:58.888443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.888450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.888457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.888466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.755 [2024-10-01 15:58:58.888472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.755 [2024-10-01 15:58:58.888478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.755 [2024-10-01 15:58:58.888491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.888498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.755 [2024-10-01 15:58:58.899201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.755 [2024-10-01 15:58:58.899223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.755 [2024-10-01 15:58:58.899533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.755 [2024-10-01 15:58:58.899550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.755 [2024-10-01 15:58:58.899557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.755 [2024-10-01 15:58:58.899784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.755 [2024-10-01 15:58:58.899794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.755 [2024-10-01 15:58:58.899801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.755 [2024-10-01 15:58:58.899829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.755 [2024-10-01 15:58:58.899840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.755 [2024-10-01 15:58:58.899849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.755 [2024-10-01 15:58:58.899856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.755 [2024-10-01 15:58:58.899867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.755 [2024-10-01 15:58:58.899876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.755 [2024-10-01 15:58:58.899882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.755 [2024-10-01 15:58:58.899888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.755 [2024-10-01 15:58:58.899902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.755 [2024-10-01 15:58:58.899909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.755 [2024-10-01 15:58:58.909738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.755 [2024-10-01 15:58:58.909759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.755 [2024-10-01 15:58:58.909991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.755 [2024-10-01 15:58:58.910004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.755 [2024-10-01 15:58:58.910011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.755 [2024-10-01 15:58:58.910160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.755 [2024-10-01 15:58:58.910170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.755 [2024-10-01 15:58:58.910177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.755 [2024-10-01 15:58:58.910308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.755 [2024-10-01 15:58:58.910320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.755 [2024-10-01 15:58:58.910346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.755 [2024-10-01 15:58:58.910354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.755 [2024-10-01 15:58:58.910360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.755 [2024-10-01 15:58:58.910369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.755 [2024-10-01 15:58:58.910375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.755 [2024-10-01 15:58:58.910381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.755 [2024-10-01 15:58:58.910394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.755 [2024-10-01 15:58:58.910400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.755 [2024-10-01 15:58:58.921024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.755 [2024-10-01 15:58:58.921046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.921375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.921391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.921399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.921472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.921481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.921488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.921632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.921645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.921791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.921800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.921807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.756 [2024-10-01 15:58:58.921816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.921826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.921832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.921859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.921873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.931785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.931805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.931988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.932001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.932008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.932154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.932164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.932170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.932508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.932522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.932681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.932691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.932698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.932707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.932713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.932719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.932899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.932909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.756 [2024-10-01 15:58:58.942501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.942521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.942688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.942700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.942707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.942874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.942885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.942892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.942904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.942916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.942926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.942932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.942938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.756 [2024-10-01 15:58:58.942947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.942953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.942959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.942973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.942979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.954764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.954785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.955164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.955181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.955188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.955360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.955371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.955378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.955561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.955576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.955716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.955727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.955734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.955743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.955749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.955755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.955905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.955916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.756 [2024-10-01 15:58:58.966317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.966339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.966679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.966696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.966707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.966871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.966881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.966887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.967142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.967156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.967192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.967200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.967207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.756 [2024-10-01 15:58:58.967216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.967222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.967228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.967356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.967365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.977779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.977801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.978122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.978139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.756 [2024-10-01 15:58:58.978147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.978285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.756 [2024-10-01 15:58:58.978294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.756 [2024-10-01 15:58:58.978301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.756 [2024-10-01 15:58:58.978482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.978497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.756 [2024-10-01 15:58:58.978637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.978648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.978654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.978664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.756 [2024-10-01 15:58:58.978670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.756 [2024-10-01 15:58:58.978683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.756 [2024-10-01 15:58:58.978826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.756 [2024-10-01 15:58:58.978835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.756 11339.86 IOPS, 44.30 MiB/s [2024-10-01 15:58:58.989362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.756 [2024-10-01 15:58:58.989385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:58.989678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:58.989694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:58.989702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:58.989796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:58.989807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:58.989814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:58.989963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:58.989976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:58.989998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:58.990005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:58.990012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.757 [2024-10-01 15:58:58.990021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:58.990027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:58.990033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:58.990161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:58.990170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.000564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.000585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.000748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.000761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:59.000768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.000936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.000947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:59.000954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.000966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.000975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.000989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.000995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.001001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.001010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.001016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.001022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.001036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.001042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.757 [2024-10-01 15:58:59.012974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.012996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.013113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.013125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:59.013132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.013231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.013240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:59.013246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.013258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.013267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.013285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.013292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.013298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.757 [2024-10-01 15:58:59.013307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.013313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.013319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.013332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.013339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.024129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.024150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.024950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.024969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:59.024980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.025132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.025142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:59.025149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.025800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.025817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.026186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.026198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.026204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.026214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.026220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.026227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.026284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.026293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.757 [2024-10-01 15:58:59.034515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.034545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.034775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.034797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:59.034805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.034960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.034971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:59.034978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.034987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.035141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.035152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.035158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.035164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.757 [2024-10-01 15:58:59.035278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.035288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.035294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.035304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.035444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.045624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.045647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.045803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.045816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.757 [2024-10-01 15:58:59.045823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.046022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.757 [2024-10-01 15:58:59.046033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.757 [2024-10-01 15:58:59.046040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.757 [2024-10-01 15:58:59.046312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.046327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.757 [2024-10-01 15:58:59.046506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.046517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.046523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.046533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.757 [2024-10-01 15:58:59.046539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.757 [2024-10-01 15:58:59.046546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.757 [2024-10-01 15:58:59.046689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.757 [2024-10-01 15:58:59.046699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.757 [2024-10-01 15:58:59.057915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.757 [2024-10-01 15:58:59.057936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.058459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.058476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.058484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.058580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.058590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.058597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.058860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.058882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.059031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.059044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.059052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.758 [2024-10-01 15:58:59.059062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.059068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.059074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.059103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.059111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.068807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.068829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.069116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.069133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.069141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.069279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.069289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.069295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.069440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.069452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.069479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.069487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.069493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.069502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.069507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.069514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.069642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.069651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.758 [2024-10-01 15:58:59.080039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.080061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.080416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.080432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.080440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.080641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.080652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.080659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.080802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.080814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.080958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.080969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.080975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.758 [2024-10-01 15:58:59.080985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.080991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.080997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.081026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.081034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.090800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.090821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.090935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.090948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.090956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.091101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.091111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.091117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.091129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.091138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.091333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.091344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.091350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.091359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.091365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.091371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.091502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.091511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.758 [2024-10-01 15:58:59.101663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.101685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.101981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.101998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.102006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.102092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.102109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.102116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.102260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.102272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.758 [2024-10-01 15:58:59.102308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.102316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.102323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.758 [2024-10-01 15:58:59.102332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.758 [2024-10-01 15:58:59.102338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.758 [2024-10-01 15:58:59.102344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.758 [2024-10-01 15:58:59.102357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.102364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.758 [2024-10-01 15:58:59.112826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.112848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.758 [2024-10-01 15:58:59.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.113155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.758 [2024-10-01 15:58:59.113163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.758 [2024-10-01 15:58:59.113307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.758 [2024-10-01 15:58:59.113317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.758 [2024-10-01 15:58:59.113323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.113468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.113481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.113618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.113628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.113639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.113648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.113654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.113660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.113690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.113697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.759 [2024-10-01 15:58:59.123914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.123935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.124107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.124119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.124127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.124213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.124223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.124230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.124241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.124250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.124260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.124266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.124273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.759 [2024-10-01 15:58:59.124282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.124289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.124295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.124308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.124316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.134739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.134761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.134979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.134993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.135001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.135093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.135103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.135113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.135244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.135256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.135601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.135613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.135620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.135629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.135635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.135642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.135797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.135807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.759 [2024-10-01 15:58:59.146008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.146030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.146219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.146232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.146239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.146337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.146347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.146354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.146514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.146527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.146665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.146675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.146681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.759 [2024-10-01 15:58:59.146691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.146697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.146703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.146732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.146740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.156234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.156263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.156423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.156435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.156443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.156533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.156543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.156549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.156560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.156570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.156580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.156586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.156592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.156600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.156606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.156611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.156625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.156632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.759 [2024-10-01 15:58:59.169262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.169284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.169689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.169707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.169714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.169888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.169898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.169905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.170364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.170379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.759 [2024-10-01 15:58:59.170539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.170549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.170556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.759 [2024-10-01 15:58:59.170569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.759 [2024-10-01 15:58:59.170575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.759 [2024-10-01 15:58:59.170581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.759 [2024-10-01 15:58:59.170723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.170733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.759 [2024-10-01 15:58:59.180238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.180259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.759 [2024-10-01 15:58:59.180422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.180435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.759 [2024-10-01 15:58:59.180442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.759 [2024-10-01 15:58:59.180579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.759 [2024-10-01 15:58:59.180589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.759 [2024-10-01 15:58:59.180596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.180607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.180616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.180626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.180633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.180640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.180649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.180655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.180661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.180675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.180685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.760 [2024-10-01 15:58:59.191704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.191725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.192211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.192230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.192238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.192374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.192384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.192391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.192656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.192671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.192818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.192829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.192835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.760 [2024-10-01 15:58:59.192845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.192851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.192857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.192894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.192902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.203749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.203770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.204127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.204144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.204152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.204252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.204261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.204268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.204424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.204436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.204574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.204585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.204592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.204602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.204607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.204614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.204643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.204650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.760 [2024-10-01 15:58:59.214945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.214966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.215086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.215099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.215107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.215255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.215264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.215271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.215401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.215413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.215551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.215560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.215567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.760 [2024-10-01 15:58:59.215576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.215582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.215588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.215618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.215626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.225632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.225653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.225832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.225845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.225853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.226007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.226017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.226024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.226155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.226167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.226305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.226315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.226322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.226331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.226340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.226347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.226376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.226384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.760 [2024-10-01 15:58:59.237511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.237532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.237697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.237709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.237717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.237855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.237870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.237877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.237889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.237898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.237908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.237915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.237921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.760 [2024-10-01 15:58:59.237929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.237935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.237940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.237953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.237960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.760 [2024-10-01 15:58:59.249653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.249675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.760 [2024-10-01 15:58:59.249889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.249902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.760 [2024-10-01 15:58:59.249910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.250133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.760 [2024-10-01 15:58:59.250144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.760 [2024-10-01 15:58:59.250150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.760 [2024-10-01 15:58:59.250170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.250184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.760 [2024-10-01 15:58:59.250193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.250199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.250205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.760 [2024-10-01 15:58:59.250214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.760 [2024-10-01 15:58:59.250220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.760 [2024-10-01 15:58:59.250226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.250239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.250246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.761 [2024-10-01 15:58:59.261492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.261514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.261723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.261736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.761 [2024-10-01 15:58:59.261744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.261976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.261987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.761 [2024-10-01 15:58:59.261994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.262005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.262015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.262025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.262031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.262037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.761 [2024-10-01 15:58:59.262046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.262052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.262058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.262485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.262496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.273064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.273085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.273297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.273315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.761 [2024-10-01 15:58:59.273323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.273515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.273526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.761 [2024-10-01 15:58:59.273532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.273984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.273999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.274166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.274176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.274183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.274192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.274199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.274205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.274381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.274391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.761 [2024-10-01 15:58:59.283986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.284006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.284267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.284280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.761 [2024-10-01 15:58:59.284287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.284480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.284490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.761 [2024-10-01 15:58:59.284497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.284509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.284518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.284528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.284534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.284540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.761 [2024-10-01 15:58:59.284548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.284554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.284564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.284577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.284584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.296052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.296074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.296458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.296475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.761 [2024-10-01 15:58:59.296483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.296627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.296637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.761 [2024-10-01 15:58:59.296643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.296741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.296752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.297571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.297586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.297593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.297602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.297609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.297615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.761 [2024-10-01 15:58:59.298050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.761 [2024-10-01 15:58:59.298063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.761 [2024-10-01 15:58:59.306343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.306365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.761 [2024-10-01 15:58:59.306605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.306618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.761 [2024-10-01 15:58:59.306626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.306708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.761 [2024-10-01 15:58:59.306717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.761 [2024-10-01 15:58:59.306724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.761 [2024-10-01 15:58:59.306735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.306745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.761 [2024-10-01 15:58:59.306759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.761 [2024-10-01 15:58:59.306765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.761 [2024-10-01 15:58:59.306771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.761 [2024-10-01 15:58:59.306780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.761 [2024-10-01 15:58:59.306786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.761 [2024-10-01 15:58:59.306792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.761 [2024-10-01 15:58:59.306805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.761 [2024-10-01 15:58:59.306812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.761 [2024-10-01 15:58:59.317371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.761 [2024-10-01 15:58:59.317393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.761 [2024-10-01 15:58:59.317785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.761 [2024-10-01 15:58:59.317802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.761 [2024-10-01 15:58:59.317810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.761 [2024-10-01 15:58:59.317894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.761 [2024-10-01 15:58:59.317905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.761 [2024-10-01 15:58:59.317912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.761 [2024-10-01 15:58:59.318056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.761 [2024-10-01 15:58:59.318068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.761 [2024-10-01 15:58:59.318094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.761 [2024-10-01 15:58:59.318101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.761 [2024-10-01 15:58:59.318107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.761 [2024-10-01 15:58:59.318116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.761 [2024-10-01 15:58:59.318122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.761 [2024-10-01 15:58:59.318129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.761 [2024-10-01 15:58:59.318142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.761 [2024-10-01 15:58:59.318149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.761 [2024-10-01 15:58:59.328451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.761 [2024-10-01 15:58:59.328472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.761 [2024-10-01 15:58:59.328600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.761 [2024-10-01 15:58:59.328613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.761 [2024-10-01 15:58:59.328623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.761 [2024-10-01 15:58:59.328766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.761 [2024-10-01 15:58:59.328776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.328783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.328919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.328932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.328958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.328966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.328973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.328981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.328987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.328994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.329007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.329014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.339120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.339142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.339511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.339527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.339535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.339686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.339696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.339703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.339849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.339869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.339897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.339904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.339911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.339920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.339926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.339932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.339950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.339956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.349583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.349604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.349784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.349797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.349805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.349969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.349980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.349987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.350481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.350495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.350981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.350993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.351000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.351010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.351017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.351023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.351190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.351199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.360611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.360632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.360867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.360880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.360887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.361037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.361046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.361053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.362019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.362035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.362260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.362274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.362281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.362290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.362296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.362302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.362454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.362464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.372653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.372675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.373027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.373044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.373051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.373266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.373276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.373283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.373576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.373590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.373745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.373755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.373761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.373771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.373777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.373784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.373814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.373821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.384162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.384184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.384523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.384541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.762 [2024-10-01 15:58:59.384548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.384747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.384758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.384764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.385023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.385037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.762 [2024-10-01 15:58:59.385185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.385195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.385202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.385211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.762 [2024-10-01 15:58:59.385217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.762 [2024-10-01 15:58:59.385223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.762 [2024-10-01 15:58:59.385252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.385260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.762 [2024-10-01 15:58:59.395416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.395437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.762 [2024-10-01 15:58:59.395582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.395595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.762 [2024-10-01 15:58:59.395603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.762 [2024-10-01 15:58:59.395703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.762 [2024-10-01 15:58:59.395712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.763 [2024-10-01 15:58:59.395719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.395730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.395740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.395749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.395756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.395762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.395771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.395776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.395782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.395796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.395805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.406527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.406548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.406661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.406674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.763 [2024-10-01 15:58:59.406682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.406900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.406910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.763 [2024-10-01 15:58:59.406917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.406929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.406938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.406948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.406954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.406960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.406969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.406975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.406981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.406994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.407001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.416962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.416983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.417142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.417155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.763 [2024-10-01 15:58:59.417162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.417321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.417331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.763 [2024-10-01 15:58:59.417338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.417349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.417358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.417368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.417375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.417384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.417393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.417399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.417405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.417418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.417425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.428207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.428228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.428885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.428903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.763 [2024-10-01 15:58:59.428911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.429056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.429066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.763 [2024-10-01 15:58:59.429073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.429269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.429284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.429373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.429381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.429388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.429397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.429403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.429409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.430082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.430094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.763 [2024-10-01 15:58:59.438623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.438645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.763 [2024-10-01 15:58:59.438825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.438837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.763 [2024-10-01 15:58:59.438845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.439075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.763 [2024-10-01 15:58:59.439087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.763 [2024-10-01 15:58:59.439097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.763 [2024-10-01 15:58:59.439282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.439296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.763 [2024-10-01 15:58:59.439323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.439330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.763 [2024-10-01 15:58:59.439337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.763 [2024-10-01 15:58:59.439346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.763 [2024-10-01 15:58:59.439352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.439358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.439372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.439378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.449916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.449937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.450222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.450238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.764 [2024-10-01 15:58:59.450246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.450439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.450449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.764 [2024-10-01 15:58:59.450456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.450486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.450496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.450515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.450522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.450529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.450537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.450543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.450549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.450562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.450569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.460783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.460808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.460990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.461003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.764 [2024-10-01 15:58:59.461011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.461202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.461211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.764 [2024-10-01 15:58:59.461218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.461230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.461239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.461249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.461255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.461262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.461271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.461276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.461282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.461296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.461303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.470872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.470901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.764 [2024-10-01 15:58:59.471109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.471121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.764 [2024-10-01 15:58:59.471128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.471272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.764 [2024-10-01 15:58:59.471282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420
00:24:57.764 [2024-10-01 15:58:59.471289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set
00:24:57.764 [2024-10-01 15:58:59.471297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.471308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor
00:24:57.764 [2024-10-01 15:58:59.471316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.471322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.471328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.471344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.471351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.764 [2024-10-01 15:58:59.471356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.764 [2024-10-01 15:58:59.471362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.764 [2024-10-01 15:58:59.471375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.764 [2024-10-01 15:58:59.482170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.764 [2024-10-01 15:58:59.482191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.764 [2024-10-01 15:58:59.482375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.764 [2024-10-01 15:58:59.482389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.764 [2024-10-01 15:58:59.482396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.764 [2024-10-01 15:58:59.482564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.764 [2024-10-01 15:58:59.482574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.764 [2024-10-01 15:58:59.482580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.764 [2024-10-01 15:58:59.482922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.764 [2024-10-01 15:58:59.482938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.764 [2024-10-01 15:58:59.483198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.764 [2024-10-01 15:58:59.483208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.764 [2024-10-01 15:58:59.483215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.764 [2024-10-01 15:58:59.483224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.764 [2024-10-01 15:58:59.483230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.483236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.483277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.483285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.493633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.493655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.494031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.494048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.765 [2024-10-01 15:58:59.494055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.494180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.494189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.765 [2024-10-01 15:58:59.494199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.494228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.494239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.494248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.494254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.494261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.494269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.494275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.494281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.494447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.494457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.765 [2024-10-01 15:58:59.503713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.503742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.503886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.503899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.765 [2024-10-01 15:58:59.503906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.504124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.504134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.765 [2024-10-01 15:58:59.504140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.504149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.505116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.505131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.505137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.505143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.765 [2024-10-01 15:58:59.505619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.505630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.505636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.505642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.505817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.515280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.515301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.515685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.515701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.765 [2024-10-01 15:58:59.515709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.515855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.515870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.765 [2024-10-01 15:58:59.515877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.516140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.516154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.516191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.516198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.516204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.516213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.516219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.516225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.765 [2024-10-01 15:58:59.516354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.765 [2024-10-01 15:58:59.516363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.765 [2024-10-01 15:58:59.526753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.526774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.765 [2024-10-01 15:58:59.527135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.527152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.765 [2024-10-01 15:58:59.527159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.527282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.765 [2024-10-01 15:58:59.527292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.765 [2024-10-01 15:58:59.527298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.765 [2024-10-01 15:58:59.527443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.527455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.765 [2024-10-01 15:58:59.527592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.527602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.527608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.765 [2024-10-01 15:58:59.527618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.765 [2024-10-01 15:58:59.527627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.765 [2024-10-01 15:58:59.527634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.527663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.527671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.538266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.538287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.538670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.538686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.766 [2024-10-01 15:58:59.538694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.538908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.538920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.766 [2024-10-01 15:58:59.538927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.539190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.539204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.539241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.539248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.539255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.539264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.539270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.539276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.539404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.539413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.766 [2024-10-01 15:58:59.549803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.549824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.550157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.550174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.766 [2024-10-01 15:58:59.550181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.550396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.550406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.766 [2024-10-01 15:58:59.550413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.550565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.550579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.550716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.550726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.550733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.766 [2024-10-01 15:58:59.550742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.550748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.550754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.550901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.550911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.561335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.561356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.561764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.561780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.766 [2024-10-01 15:58:59.561788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.561956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.561966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.766 [2024-10-01 15:58:59.561973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.562240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.562254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.562291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.562298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.562305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.562314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.562320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.562326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.562454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.562463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.766 [2024-10-01 15:58:59.572859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.572885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.573261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.573281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.766 [2024-10-01 15:58:59.573288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.573426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.573435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.766 [2024-10-01 15:58:59.573442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.766 [2024-10-01 15:58:59.573615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.573629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.766 [2024-10-01 15:58:59.573769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.573779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.573786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.766 [2024-10-01 15:58:59.573795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.766 [2024-10-01 15:58:59.573801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.766 [2024-10-01 15:58:59.573807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.766 [2024-10-01 15:58:59.573956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.573966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.766 [2024-10-01 15:58:59.584203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.584223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.766 [2024-10-01 15:58:59.584459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.766 [2024-10-01 15:58:59.584471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.766 [2024-10-01 15:58:59.584478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.584571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.584580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.767 [2024-10-01 15:58:59.584587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.584598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.584608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.584618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.584625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.584631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.767 [2024-10-01 15:58:59.584640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.584645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.584655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.767 [2024-10-01 15:58:59.584669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.767 [2024-10-01 15:58:59.584675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.767 [2024-10-01 15:58:59.595410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.595431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.595642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.595654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.767 [2024-10-01 15:58:59.595662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.595750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.595759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.767 [2024-10-01 15:58:59.595766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.595777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.595786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.595796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.595802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.595809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.767 [2024-10-01 15:58:59.595817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.595823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.595829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.767 [2024-10-01 15:58:59.595842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.767 [2024-10-01 15:58:59.595849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.767 [2024-10-01 15:58:59.605697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.605718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.605945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.605959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.767 [2024-10-01 15:58:59.605966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.606055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.606064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.767 [2024-10-01 15:58:59.606071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.606202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.606217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.606355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.606364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.606371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.767 [2024-10-01 15:58:59.606380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.767 [2024-10-01 15:58:59.606386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.767 [2024-10-01 15:58:59.606391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.767 [2024-10-01 15:58:59.606421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.767 [2024-10-01 15:58:59.606429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.767 [2024-10-01 15:58:59.618377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.618397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.767 [2024-10-01 15:58:59.618682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.618698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.767 [2024-10-01 15:58:59.618706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.618874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.767 [2024-10-01 15:58:59.618885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.767 [2024-10-01 15:58:59.618892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.767 [2024-10-01 15:58:59.619036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.767 [2024-10-01 15:58:59.619049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.619074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.619082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.619088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.768 [2024-10-01 15:58:59.619098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.619104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.619110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.619246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.619255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.629144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.629165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.629455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.629471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.768 [2024-10-01 15:58:59.629482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.629623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.629633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.768 [2024-10-01 15:58:59.629639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.629782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.629794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.629939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.629949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.629956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.629965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.629971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.629977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.630006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.630014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.768 [2024-10-01 15:58:59.640978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.640999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.641244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.641257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.768 [2024-10-01 15:58:59.641264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.641408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.641418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.768 [2024-10-01 15:58:59.641425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.641436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.641445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.641455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.641461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.641468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.768 [2024-10-01 15:58:59.641476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.641482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.641488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.641505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.641511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.652557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.652580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.652845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.652859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.768 [2024-10-01 15:58:59.652874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.653091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.653102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.768 [2024-10-01 15:58:59.653108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.653120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.653130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.653147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.653154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.653161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.653169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.653175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.653181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.653195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.653202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.768 [2024-10-01 15:58:59.664272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.664292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.664529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.664541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.768 [2024-10-01 15:58:59.664549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.664646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.664656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.768 [2024-10-01 15:58:59.664663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.768 [2024-10-01 15:58:59.664676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.664685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.768 [2024-10-01 15:58:59.664699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.664705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.664712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.768 [2024-10-01 15:58:59.664720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.768 [2024-10-01 15:58:59.664727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.768 [2024-10-01 15:58:59.664733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.768 [2024-10-01 15:58:59.664747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.664753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.768 [2024-10-01 15:58:59.676358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.676380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.768 [2024-10-01 15:58:59.676637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.768 [2024-10-01 15:58:59.676655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.768 [2024-10-01 15:58:59.676662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.676824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.676837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.769 [2024-10-01 15:58:59.676845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.677405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.677421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.677723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.677734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.677740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.677750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.677756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.677762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.677923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.677933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.769 [2024-10-01 15:58:59.686483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.686504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.686662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.686675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.769 [2024-10-01 15:58:59.686682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.686781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.686791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.769 [2024-10-01 15:58:59.686797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.686809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.686818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.686828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.686833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.686840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.769 [2024-10-01 15:58:59.686848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.686854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.686860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.686880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.686887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.696967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.696988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.697200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.697213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.769 [2024-10-01 15:58:59.697221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.697416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.697426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.769 [2024-10-01 15:58:59.697433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.698053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.698071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.698601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.698613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.698620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.698629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.698635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.698642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.698936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.698951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.769 [2024-10-01 15:58:59.709600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.709620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.710010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.710027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.769 [2024-10-01 15:58:59.710034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.710256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.710266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.769 [2024-10-01 15:58:59.710273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.710385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.710398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.769 [2024-10-01 15:58:59.710517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.710526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.710532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.769 [2024-10-01 15:58:59.710542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.769 [2024-10-01 15:58:59.710548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.769 [2024-10-01 15:58:59.710554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.769 [2024-10-01 15:58:59.710696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.710705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.769 [2024-10-01 15:58:59.720450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.720472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.769 [2024-10-01 15:58:59.721007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.721026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.769 [2024-10-01 15:58:59.721034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.721162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.769 [2024-10-01 15:58:59.721171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.769 [2024-10-01 15:58:59.721178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.769 [2024-10-01 15:58:59.721340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.721353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.721379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.721386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.721396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.721405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.721411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.721417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.721431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.721438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.770 [2024-10-01 15:58:59.730531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.730671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.730929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.730945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.770 [2024-10-01 15:58:59.730953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.731301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.731315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.770 [2024-10-01 15:58:59.731323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.731332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.731716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.731729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.731735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.731742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.770 [2024-10-01 15:58:59.731926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.731937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.731943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.731949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.732091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.742115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.742136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.742347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.742360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.770 [2024-10-01 15:58:59.742367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.742458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.742471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.770 [2024-10-01 15:58:59.742478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.742826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.742840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.743005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.743016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.743022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.743032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.743038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.743045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.743247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.743258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.770 [2024-10-01 15:58:59.753493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.753515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.753708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.753721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.770 [2024-10-01 15:58:59.753730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.753874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.753886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.770 [2024-10-01 15:58:59.753893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.753905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.753914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.770 [2024-10-01 15:58:59.754362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.754373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.754379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.770 [2024-10-01 15:58:59.754389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.770 [2024-10-01 15:58:59.754396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.770 [2024-10-01 15:58:59.754403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.770 [2024-10-01 15:58:59.754575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.754585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.770 [2024-10-01 15:58:59.764402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.764423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.770 [2024-10-01 15:58:59.764680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.770 [2024-10-01 15:58:59.764694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.770 [2024-10-01 15:58:59.764701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.770 [2024-10-01 15:58:59.764851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.764861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.771 [2024-10-01 15:58:59.764875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.764886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.764895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.764904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.764912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.764919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.764928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.764934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.764940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.764954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.764962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.771 [2024-10-01 15:58:59.776054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.776076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.776242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.776256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.771 [2024-10-01 15:58:59.776265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.776484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.776495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.771 [2024-10-01 15:58:59.776502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.776514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.776523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.776533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.776539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.776549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.771 [2024-10-01 15:58:59.776557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.776563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.776570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.776584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.776590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.787494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.787516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.787776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.787791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.771 [2024-10-01 15:58:59.787799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.787871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.787882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.771 [2024-10-01 15:58:59.787889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.787900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.787910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.787920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.787927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.787933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.787942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.787949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.787956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.787969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.787975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.771 [2024-10-01 15:58:59.798922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.798944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.799235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.799251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.771 [2024-10-01 15:58:59.799259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.799454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.799466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.771 [2024-10-01 15:58:59.799476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.799943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.799959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.771 [2024-10-01 15:58:59.800120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.800131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.800139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.771 [2024-10-01 15:58:59.800149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.771 [2024-10-01 15:58:59.800155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.771 [2024-10-01 15:58:59.800162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.771 [2024-10-01 15:58:59.800304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.800314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.771 [2024-10-01 15:58:59.809370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.809391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.771 [2024-10-01 15:58:59.809595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.771 [2024-10-01 15:58:59.809609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964410 with addr=10.0.0.2, port=4420 00:24:57.771 [2024-10-01 15:58:59.809617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964410 is same with the state(6) to be set 00:24:57.771 [2024-10-01 15:58:59.809751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.809761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.809768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.809779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x964410 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.809789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.809800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.809806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.809813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.809822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.809828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.809835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.809849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.772 [2024-10-01 15:58:59.809856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.822059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.822082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.822323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.822336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.822344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.822947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.823134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.823145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.823152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.823299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.833483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.833744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.833760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.833768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.833791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.833806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.833813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.833819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.834274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.844974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.845210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.845227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.845236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.845252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.845266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.845273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.845279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.845295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.855987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.856121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.856136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.856143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.856165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.856179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.856186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.856192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.856208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.868301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.868647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.868665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.868673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.868819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.868973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.772 [2024-10-01 15:58:59.868984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.772 [2024-10-01 15:58:59.868991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.772 [2024-10-01 15:58:59.869025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.772 [2024-10-01 15:58:59.879880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.772 [2024-10-01 15:58:59.880058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.772 [2024-10-01 15:58:59.880073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.772 [2024-10-01 15:58:59.880081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.772 [2024-10-01 15:58:59.880122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.772 [2024-10-01 15:58:59.880141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.880147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.880154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.880182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.892308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.892885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.892906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.892914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.893184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.893282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.893293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.893303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.893418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.905520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.905833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.905851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.905858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.905897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.905913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.905920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.905926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.905943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.916007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.916234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.916249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.916257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.916272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.916287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.916294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.916300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.916316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.928626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.928821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.928845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.928852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.929141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.929298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.929309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.929315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.929350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.939619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.939745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.939762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.939770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.939783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.939794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.939800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.939806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.939819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.773 [2024-10-01 15:58:59.940952] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:57.773 [2024-10-01 15:58:59.950618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.773 [2024-10-01 15:58:59.950867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.773 [2024-10-01 15:58:59.950883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.773 [2024-10-01 15:58:59.950891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.773 [2024-10-01 15:58:59.950904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.773 [2024-10-01 15:58:59.950915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.773 [2024-10-01 15:58:59.950921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.773 [2024-10-01 15:58:59.950928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.773 [2024-10-01 15:58:59.950940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.773 [2024-10-01 15:58:59.962972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.773 [2024-10-01 15:58:59.963338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.773 [2024-10-01 15:58:59.963356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.773 [2024-10-01 15:58:59.963363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.773 [2024-10-01 15:58:59.963505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.773 [2024-10-01 15:58:59.963534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.773 [2024-10-01 15:58:59.963541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.773 [2024-10-01 15:58:59.963548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.773 [2024-10-01 15:58:59.963562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.773 [2024-10-01 15:58:59.973972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.773 [2024-10-01 15:58:59.974146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.773 [2024-10-01 15:58:59.974162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.773 [2024-10-01 15:58:59.974170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.773 [2024-10-01 15:58:59.974305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.773 [2024-10-01 15:58:59.974335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.773 [2024-10-01 15:58:59.974342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.773 [2024-10-01 15:58:59.974349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.773 [2024-10-01 15:58:59.974363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.773 [2024-10-01 15:58:59.984435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.773 [2024-10-01 15:58:59.984654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:58:59.984669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:58:59.984677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:58:59.984689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:58:59.984699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:58:59.984705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:58:59.984711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:58:59.984725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 11362.88 IOPS, 44.39 MiB/s [2024-10-01 15:58:59.995695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:58:59.995925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:58:59.995942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:58:59.995950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:58:59.995963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:58:59.995974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:58:59.995980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:58:59.995986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:58:59.996000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.009317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.009578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.009596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.009607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.009622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.009635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.009642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.009654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.009670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.020789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.021042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.021058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.021066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.021079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.021090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.021097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.021103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.021117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.034387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.034817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.034835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.034844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.035344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.035798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.035810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.035818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.035986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.046637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.047007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.047026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.047034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.047210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.047355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.047365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.047372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.047405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.056704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.056980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.057000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.057008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.057978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.058294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.058305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.058312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.058617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.070549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.070926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.070945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.070953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.071098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.071128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.071136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.071143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.071157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.774 [2024-10-01 15:59:00.081724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.774 [2024-10-01 15:59:00.081973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.774 [2024-10-01 15:59:00.081991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.774 [2024-10-01 15:59:00.081999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.774 [2024-10-01 15:59:00.082142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.774 [2024-10-01 15:59:00.082172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.774 [2024-10-01 15:59:00.082179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.774 [2024-10-01 15:59:00.082186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.774 [2024-10-01 15:59:00.082200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.093269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.093510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.093525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.093533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.093545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.093559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.093566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.093572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.093585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.105933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.106120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.106135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.106143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.106155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.106166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.106172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.106179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.106191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.117094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.117307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.117322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.117330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.117342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.117354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.117360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.117366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.117379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.128798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.129020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.129037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.129044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.129057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.129068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.129075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.129081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.129097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.141468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.141789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.141807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.141814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.141988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.142019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.142027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.142034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.142047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.152114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.152284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.152298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.152305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.152317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.152328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.152335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.152341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.152354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.164640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.164881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.164897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.164905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.164917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.164928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.164934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.164941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.164954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.176587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.177022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.177041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.177053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.177197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.177556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.177568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.775 [2024-10-01 15:59:00.177574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.775 [2024-10-01 15:59:00.177618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.775 [2024-10-01 15:59:00.187220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.775 [2024-10-01 15:59:00.187474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.775 [2024-10-01 15:59:00.187489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.775 [2024-10-01 15:59:00.187497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.775 [2024-10-01 15:59:00.187509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.775 [2024-10-01 15:59:00.187520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.775 [2024-10-01 15:59:00.187526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.187532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.187545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.198521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.776 [2024-10-01 15:59:00.198814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.776 [2024-10-01 15:59:00.198830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.776 [2024-10-01 15:59:00.198838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.776 [2024-10-01 15:59:00.198850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.776 [2024-10-01 15:59:00.198866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.776 [2024-10-01 15:59:00.198873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.198880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.198894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.210204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.776 [2024-10-01 15:59:00.210466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.776 [2024-10-01 15:59:00.210484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.776 [2024-10-01 15:59:00.210492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.776 [2024-10-01 15:59:00.210645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.776 [2024-10-01 15:59:00.210674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.776 [2024-10-01 15:59:00.210685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.210692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.210705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.221275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.776 [2024-10-01 15:59:00.221526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.776 [2024-10-01 15:59:00.221543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.776 [2024-10-01 15:59:00.221551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.776 [2024-10-01 15:59:00.221681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.776 [2024-10-01 15:59:00.221712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.776 [2024-10-01 15:59:00.221719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.221725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.221739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.232894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.776 [2024-10-01 15:59:00.233034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.776 [2024-10-01 15:59:00.233049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.776 [2024-10-01 15:59:00.233057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.776 [2024-10-01 15:59:00.233068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.776 [2024-10-01 15:59:00.233079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.776 [2024-10-01 15:59:00.233085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.233091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.233104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.244986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.776 [2024-10-01 15:59:00.245226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.776 [2024-10-01 15:59:00.245242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.776 [2024-10-01 15:59:00.245250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.776 [2024-10-01 15:59:00.245263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.776 [2024-10-01 15:59:00.245437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.776 [2024-10-01 15:59:00.245446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.776 [2024-10-01 15:59:00.245453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.776 [2024-10-01 15:59:00.245647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.776 [2024-10-01 15:59:00.255052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.776 [2024-10-01 15:59:00.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.776 [2024-10-01 15:59:00.255193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.776 [2024-10-01 15:59:00.255200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.776 [2024-10-01 15:59:00.255354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.776 [2024-10-01 15:59:00.255385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.776 [2024-10-01 15:59:00.255393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.776 [2024-10-01 15:59:00.255400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.776 [2024-10-01 15:59:00.255414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.776 [2024-10-01 15:59:00.266394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.776 [2024-10-01 15:59:00.266588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.776 [2024-10-01 15:59:00.266604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.776 [2024-10-01 15:59:00.266612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.776 [2024-10-01 15:59:00.266624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.776 [2024-10-01 15:59:00.266638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.776 [2024-10-01 15:59:00.266645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.776 [2024-10-01 15:59:00.266651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.776 [2024-10-01 15:59:00.266664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.776 [2024-10-01 15:59:00.277447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.776 [2024-10-01 15:59:00.277616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.776 [2024-10-01 15:59:00.277630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.776 [2024-10-01 15:59:00.277638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.776 [2024-10-01 15:59:00.277650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.776 [2024-10-01 15:59:00.277661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.776 [2024-10-01 15:59:00.277668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.776 [2024-10-01 15:59:00.277674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.776 [2024-10-01 15:59:00.277687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.776 [2024-10-01 15:59:00.289258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.776 [2024-10-01 15:59:00.289633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.289651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.289659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.289836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.289884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.289893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.289899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.289913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.300818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.301026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.301044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.301052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.301183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.301212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.301220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.301226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.301240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.311169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.311343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.311358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.311365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.311377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.311397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.311403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.311410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.311423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.323943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.324120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.324134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.324142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.324154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.324166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.324172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.324183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.324196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.335962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.336241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.336260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.336268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.336541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.336573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.336581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.336588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.336602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.346029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.346158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.346173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.346180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.346192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.346202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.346208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.346215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.346227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.356483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.356685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.356700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.356708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.356720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.356731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.356737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.356744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.356757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.366594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.366712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.366730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.366738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.777 [2024-10-01 15:59:00.366750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.777 [2024-10-01 15:59:00.366760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.777 [2024-10-01 15:59:00.366766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.777 [2024-10-01 15:59:00.366773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.777 [2024-10-01 15:59:00.366786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.777 [2024-10-01 15:59:00.377109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.777 [2024-10-01 15:59:00.377234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.777 [2024-10-01 15:59:00.377249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.777 [2024-10-01 15:59:00.377256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.377386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.377416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.377424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.377430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.377443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.388265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.388439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.388453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.388460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.388472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.388491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.388498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.388505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.388517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.401290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.401664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.401682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.401690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.402044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.402209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.402219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.402226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.402256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.413128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.413458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.413477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.413485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.413626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.413666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.413674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.413681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.413694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.423417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.423625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.423640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.423648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.423661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.423672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.423678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.423684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.423698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.436337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.436629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.436647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.436655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.437010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.437168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.437179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.437186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.437229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.447466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.447707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.447723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.447730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.447743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.778 [2024-10-01 15:59:00.447754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.778 [2024-10-01 15:59:00.447760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.778 [2024-10-01 15:59:00.447767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.778 [2024-10-01 15:59:00.447779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.778 [2024-10-01 15:59:00.459039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.778 [2024-10-01 15:59:00.459169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.778 [2024-10-01 15:59:00.459183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.778 [2024-10-01 15:59:00.459191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.778 [2024-10-01 15:59:00.459202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.779 [2024-10-01 15:59:00.459213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.779 [2024-10-01 15:59:00.459220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.779 [2024-10-01 15:59:00.459226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.779 [2024-10-01 15:59:00.459238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.779 [2024-10-01 15:59:00.469764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.779 [2024-10-01 15:59:00.469968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.779 [2024-10-01 15:59:00.469984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.779 [2024-10-01 15:59:00.469991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.779 [2024-10-01 15:59:00.470003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.779 [2024-10-01 15:59:00.470014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.779 [2024-10-01 15:59:00.470021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.779 [2024-10-01 15:59:00.470028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.779 [2024-10-01 15:59:00.470041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.779 [2024-10-01 15:59:00.482063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.779 [2024-10-01 15:59:00.482521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.779 [2024-10-01 15:59:00.482540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.779 [2024-10-01 15:59:00.482551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.779 [2024-10-01 15:59:00.482714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.779 [2024-10-01 15:59:00.482748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.779 [2024-10-01 15:59:00.482756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.779 [2024-10-01 15:59:00.482762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.779 [2024-10-01 15:59:00.482776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.779 [2024-10-01 15:59:00.493817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.779 [2024-10-01 15:59:00.494027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.779 [2024-10-01 15:59:00.494050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.779 [2024-10-01 15:59:00.494058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.779 [2024-10-01 15:59:00.494071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.779 [2024-10-01 15:59:00.494082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.779 [2024-10-01 15:59:00.494088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.779 [2024-10-01 15:59:00.494095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.779 [2024-10-01 15:59:00.494108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.779 [2024-10-01 15:59:00.506809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.779 [2024-10-01 15:59:00.507165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.779 [2024-10-01 15:59:00.507184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.779 [2024-10-01 15:59:00.507192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.779 [2024-10-01 15:59:00.507365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.779 [2024-10-01 15:59:00.507511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.779 [2024-10-01 15:59:00.507522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.779 [2024-10-01 15:59:00.507529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.779 [2024-10-01 15:59:00.507560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.779 [2024-10-01 15:59:00.518056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.518369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.518388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.779 [2024-10-01 15:59:00.518395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.779 [2024-10-01 15:59:00.518425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.779 [2024-10-01 15:59:00.518437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.779 [2024-10-01 15:59:00.518446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.779 [2024-10-01 15:59:00.518453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.779 [2024-10-01 15:59:00.518466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.779 [2024-10-01 15:59:00.528124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.528303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.528317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.779 [2024-10-01 15:59:00.528325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.779 [2024-10-01 15:59:00.529145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.779 [2024-10-01 15:59:00.529662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.779 [2024-10-01 15:59:00.529674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.779 [2024-10-01 15:59:00.529681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.779 [2024-10-01 15:59:00.529844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.779 [2024-10-01 15:59:00.539737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.539854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.539874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.779 [2024-10-01 15:59:00.539882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.779 [2024-10-01 15:59:00.539894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.779 [2024-10-01 15:59:00.539905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.779 [2024-10-01 15:59:00.539911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.779 [2024-10-01 15:59:00.539918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.779 [2024-10-01 15:59:00.539930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.779 [2024-10-01 15:59:00.552148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.552457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.552475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.779 [2024-10-01 15:59:00.552483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.779 [2024-10-01 15:59:00.552625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.779 [2024-10-01 15:59:00.552651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.779 [2024-10-01 15:59:00.552659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.779 [2024-10-01 15:59:00.552666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.779 [2024-10-01 15:59:00.552679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.779 [2024-10-01 15:59:00.562854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.562976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.562991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.779 [2024-10-01 15:59:00.562999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.779 [2024-10-01 15:59:00.563010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.779 [2024-10-01 15:59:00.563021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.779 [2024-10-01 15:59:00.563027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.779 [2024-10-01 15:59:00.563034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.779 [2024-10-01 15:59:00.563047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.779 [2024-10-01 15:59:00.574437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.779 [2024-10-01 15:59:00.574646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.779 [2024-10-01 15:59:00.574661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.574668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.574680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.574691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.574697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.574704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.574717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.586085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.586211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.586226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.586233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.586245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.586256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.586262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.586269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.586281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.596536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.596801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.596816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.596824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.596840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.596851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.596857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.596869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.596882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.607515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.607771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.607787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.607795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.607913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.608024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.608033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.608040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.608068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.618629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.618799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.618812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.618819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.618831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.618842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.618848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.618855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.618880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.630122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.630295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.630309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.630316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.630328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.630339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.630345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.630355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.630367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.642622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.642843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.642859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.642872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.642884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.642895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.642902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.642909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.642922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.653837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.654089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.654106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.654113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.654243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.654272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.654279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.654286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.654412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.664354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.664569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.664583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.780 [2024-10-01 15:59:00.664591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.780 [2024-10-01 15:59:00.664603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.780 [2024-10-01 15:59:00.664614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.780 [2024-10-01 15:59:00.664620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.780 [2024-10-01 15:59:00.664627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.780 [2024-10-01 15:59:00.664640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.780 [2024-10-01 15:59:00.677097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.780 [2024-10-01 15:59:00.677339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.780 [2024-10-01 15:59:00.677357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.677365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.677377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.677388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.677394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.677400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.677413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.687889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.688031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.688045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.688053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.688065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.688075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.688082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.688088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.688101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.699983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.700243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.700258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.700266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.700278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.700289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.700295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.700302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.700315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.712602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.712966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.712984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.712992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.713167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.713202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.713210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.713217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.713345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.723760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.724001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.724018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.724026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.724158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.724189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.724196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.724203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.724217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.734332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.734566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.734583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.734591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.734752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.734903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.734914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.734921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.734952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.745203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.745503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.745521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.745530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.745559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.745571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.745578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.745585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.745718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.756215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.756379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.756394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.756402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.756414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.756426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.781 [2024-10-01 15:59:00.756434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.781 [2024-10-01 15:59:00.756440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.781 [2024-10-01 15:59:00.756454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.781 [2024-10-01 15:59:00.767368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.781 [2024-10-01 15:59:00.767549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.781 [2024-10-01 15:59:00.767564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.781 [2024-10-01 15:59:00.767571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.781 [2024-10-01 15:59:00.767699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.781 [2024-10-01 15:59:00.767729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.767736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.767743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.767756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.778078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.778322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.778337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.778344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.778473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.778503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.778510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.778517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.778644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.789076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.789321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.789336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.789346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.789359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.789369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.789375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.789381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.789394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.799198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.799446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.799461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.799469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.799481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.799492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.799498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.799505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.799517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.810273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.810597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.810614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.810622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.810650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.810663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.810669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.810675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.810689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.822303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.822663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.822682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.822690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.822873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.822917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.822929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.822936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.822950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.833181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.782 [2024-10-01 15:59:00.833286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.782 [2024-10-01 15:59:00.833301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.782 [2024-10-01 15:59:00.833308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.782 [2024-10-01 15:59:00.833320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.782 [2024-10-01 15:59:00.833331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.782 [2024-10-01 15:59:00.833338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.782 [2024-10-01 15:59:00.833345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.782 [2024-10-01 15:59:00.833357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.782 [2024-10-01 15:59:00.845786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.782 [2024-10-01 15:59:00.846009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.782 [2024-10-01 15:59:00.846026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.782 [2024-10-01 15:59:00.846033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.782 [2024-10-01 15:59:00.846047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.782 [2024-10-01 15:59:00.846057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.782 [2024-10-01 15:59:00.846064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.782 [2024-10-01 15:59:00.846070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.782 [2024-10-01 15:59:00.846083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.782 [2024-10-01 15:59:00.856435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.782 [2024-10-01 15:59:00.856640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.782 [2024-10-01 15:59:00.856656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.782 [2024-10-01 15:59:00.856663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.782 [2024-10-01 15:59:00.856675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.782 [2024-10-01 15:59:00.856686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.782 [2024-10-01 15:59:00.856692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.856699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.856712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.867628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.867989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.868007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.868015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.868158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.868199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.868207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.868214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.868342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.878558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.878795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.878810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.878818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.878831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.878841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.878847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.878853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.878872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.890223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.890602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.890620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.890628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.890656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.890667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.890674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.890680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.890694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.901974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.902316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.902334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.902342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.902492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.902520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.902527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.902534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.902547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.912959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.913071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.913085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.913092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.913104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.913115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.913121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.913127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.913139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.924794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.924967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.924982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.924989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.925001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.925011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.925017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.925024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.925037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.934859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.783 [2024-10-01 15:59:00.935088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.783 [2024-10-01 15:59:00.935102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.783 [2024-10-01 15:59:00.935110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.783 [2024-10-01 15:59:00.935121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.783 [2024-10-01 15:59:00.935132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.783 [2024-10-01 15:59:00.935138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.783 [2024-10-01 15:59:00.935148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.783 [2024-10-01 15:59:00.935161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.783 [2024-10-01 15:59:00.946302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:00.946461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:00.946475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:00.946482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:00.946494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:00.946504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:00.946511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:00.946518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:00.946531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:00.956367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:00.956615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:00.956630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:00.956637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:00.956650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:00.956661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:00.956667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:00.956673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:00.956687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:00.967036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:00.967262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:00.967278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:00.967286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:00.968096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:00.968607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:00.968620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:00.968627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:00.968904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:00.979186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:00.979581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:00.979599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:00.979607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:00.979751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:00.979780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:00.979788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:00.979795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:00.979809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 11362.33 IOPS, 44.38 MiB/s [2024-10-01 15:59:00.992547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:00.992795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:00.992812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:00.992820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:00.992832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:00.992844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:00.992850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:00.992858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:00.992879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:01.002613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:01.002904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:01.002920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:01.002928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:01.002941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:01.002951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:01.002958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:01.002964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:01.002977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:01.013968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:01.014136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:01.014150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:01.014158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:01.014170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:01.014185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:01.014191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:01.014197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:01.014210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.784 [2024-10-01 15:59:01.025738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.784 [2024-10-01 15:59:01.025868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.784 [2024-10-01 15:59:01.025883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.784 [2024-10-01 15:59:01.025890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.784 [2024-10-01 15:59:01.026140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.784 [2024-10-01 15:59:01.026283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.784 [2024-10-01 15:59:01.026293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.784 [2024-10-01 15:59:01.026300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.784 [2024-10-01 15:59:01.026439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.784 [2024-10-01 15:59:01.027678] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x985f50 was disconnected and freed. reset controller. 
00:24:57.784 [2024-10-01 15:59:01.027702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.027732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.033535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:57.785 [2024-10-01 15:59:01.033557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.785 [2024-10-01 15:59:01.033572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4422 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:57.785 [2024-10-01 15:59:01.033579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:24:57.785 [2024-10-01 15:59:01.033586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:57.785 [2024-10-01 15:59:01.033592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xabf070 00:24:57.785 [2024-10-01 15:59:01.037297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.037815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.037829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.037836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.785 [2024-10-01 15:59:01.037913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.037925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.785 [2024-10-01 15:59:01.038177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.038194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.785 [2024-10-01 15:59:01.038201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.038213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.038231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.038237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.038244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.785 [2024-10-01 15:59:01.038257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.038266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.785 [2024-10-01 15:59:01.038739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.038755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.785 [2024-10-01 15:59:01.038762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.039117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.039310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.039321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.039328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.785 [2024-10-01 15:59:01.039473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.785 [2024-10-01 15:59:01.048632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.048653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.048875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.048888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.785 [2024-10-01 15:59:01.048896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.048985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.048996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.785 [2024-10-01 15:59:01.049002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.049014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.049023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.049033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.049039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.049045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.785 [2024-10-01 15:59:01.049054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.049063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.049069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.785 [2024-10-01 15:59:01.049083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.785 [2024-10-01 15:59:01.049089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.785 [2024-10-01 15:59:01.061318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.061341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.061742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.061761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.785 [2024-10-01 15:59:01.061769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.061910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.061921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.785 [2024-10-01 15:59:01.061928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.062568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.062585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.062963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.062975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.062982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.785 [2024-10-01 15:59:01.062991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.785 [2024-10-01 15:59:01.062997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.785 [2024-10-01 15:59:01.063003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.785 [2024-10-01 15:59:01.063053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.785 [2024-10-01 15:59:01.063061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.785 [2024-10-01 15:59:01.072948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.072971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.785 [2024-10-01 15:59:01.073265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.073281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.785 [2024-10-01 15:59:01.073289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.073457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.785 [2024-10-01 15:59:01.073468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.785 [2024-10-01 15:59:01.073475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.785 [2024-10-01 15:59:01.073619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.073639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.785 [2024-10-01 15:59:01.073777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.073788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.073794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.786 [2024-10-01 15:59:01.073804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.073810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.073816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.073846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.073853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.083032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.083062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.083214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.083226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.786 [2024-10-01 15:59:01.083233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.083716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.083736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.786 [2024-10-01 15:59:01.083745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.083756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.084028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.084041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.084047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.084054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.084206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.084216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.084222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.084229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.084258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.786 [2024-10-01 15:59:01.094637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.094659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.095021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.095040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.786 [2024-10-01 15:59:01.095048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.095217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.095227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.786 [2024-10-01 15:59:01.095234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.095378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.095391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.095528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.095537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.095543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.786 [2024-10-01 15:59:01.095552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.095558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.095565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.095594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.095601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.105807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.105828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.105945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.105959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.786 [2024-10-01 15:59:01.105966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.106111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.106120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.786 [2024-10-01 15:59:01.106127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.106138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.106147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.106157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.106163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.106169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.106177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.106183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.106192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.106205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.106212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.786 [2024-10-01 15:59:01.118410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.118433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.786 [2024-10-01 15:59:01.118734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.118751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.786 [2024-10-01 15:59:01.118759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.118888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.786 [2024-10-01 15:59:01.118899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.786 [2024-10-01 15:59:01.118906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.786 [2024-10-01 15:59:01.119255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.119269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.786 [2024-10-01 15:59:01.119426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.119436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.119443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.786 [2024-10-01 15:59:01.119452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.786 [2024-10-01 15:59:01.119458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.786 [2024-10-01 15:59:01.119464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.786 [2024-10-01 15:59:01.119607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.119617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.786 [2024-10-01 15:59:01.131149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.131171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.131356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.131368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.787 [2024-10-01 15:59:01.131376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.131507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.131517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.787 [2024-10-01 15:59:01.131523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.131535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.131544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.131558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.131564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.131570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.131578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.131584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.131590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.131603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.131610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.787 [2024-10-01 15:59:01.141514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.141534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.141762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.141775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.787 [2024-10-01 15:59:01.141782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.141944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.141954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.787 [2024-10-01 15:59:01.141961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.142041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.142051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.144306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.144323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.144330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.787 [2024-10-01 15:59:01.144339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.144345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.144351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.144829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.144841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.153974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.153995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.155979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.156000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.787 [2024-10-01 15:59:01.156011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.156236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.156246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.787 [2024-10-01 15:59:01.156252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.157164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.157181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.157714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.157725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.157731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.157740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.157747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.157753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.157810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.157818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.787 [2024-10-01 15:59:01.167179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.167200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.167670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.167687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.787 [2024-10-01 15:59:01.167695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.167910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.167921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.787 [2024-10-01 15:59:01.167928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.168386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.168401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.168670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.168680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.168686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.787 [2024-10-01 15:59:01.168696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.787 [2024-10-01 15:59:01.168702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.787 [2024-10-01 15:59:01.168708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.787 [2024-10-01 15:59:01.168751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.168760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.787 [2024-10-01 15:59:01.178214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.178235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.787 [2024-10-01 15:59:01.178474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.178487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.787 [2024-10-01 15:59:01.178494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.178686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.787 [2024-10-01 15:59:01.178697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.787 [2024-10-01 15:59:01.178704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.787 [2024-10-01 15:59:01.178715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.178724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.787 [2024-10-01 15:59:01.178734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.788 [2024-10-01 15:59:01.178740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.788 [2024-10-01 15:59:01.178747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.788 [2024-10-01 15:59:01.178755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.788 [2024-10-01 15:59:01.178760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.788 [2024-10-01 15:59:01.178767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.788 [2024-10-01 15:59:01.178780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.788 [2024-10-01 15:59:01.178787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.788 [2024-10-01 15:59:01.189397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.788 [2024-10-01 15:59:01.189418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.788 [2024-10-01 15:59:01.189711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.788 [2024-10-01 15:59:01.189727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.788 [2024-10-01 15:59:01.189734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.788 [2024-10-01 15:59:01.189926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.788 [2024-10-01 15:59:01.189937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.788 [2024-10-01 15:59:01.189944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.788 [2024-10-01 15:59:01.190089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.788 [2024-10-01 15:59:01.190101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.788 [2024-10-01 15:59:01.190238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.788 [2024-10-01 15:59:01.190252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.788 [2024-10-01 15:59:01.190259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.788 [2024-10-01 15:59:01.190268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.190275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.788 [2024-10-01 15:59:01.190280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.788 [2024-10-01 15:59:01.190310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.190317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.200040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.200061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.200221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.200233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.788 [2024-10-01 15:59:01.200241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.200316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.200325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.788 [2024-10-01 15:59:01.200332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.200344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.200353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.200362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.200369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.788 [2024-10-01 15:59:01.200375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.788 [2024-10-01 15:59:01.200384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.200390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.788 [2024-10-01 15:59:01.200396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.788 [2024-10-01 15:59:01.200409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.200415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.212585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.212606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.212792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.212804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.788 [2024-10-01 15:59:01.212812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.213048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.213060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.788 [2024-10-01 15:59:01.213066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.213078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.213087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.213112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.213119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.788 [2024-10-01 15:59:01.213125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.788 [2024-10-01 15:59:01.213135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.213140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.788 [2024-10-01 15:59:01.213146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.788 [2024-10-01 15:59:01.213160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.213166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.788 [2024-10-01 15:59:01.224513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.224534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.788 [2024-10-01 15:59:01.224879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.224896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.788 [2024-10-01 15:59:01.224904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.225061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.788 [2024-10-01 15:59:01.225075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.788 [2024-10-01 15:59:01.225082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.788 [2024-10-01 15:59:01.225227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.225239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.788 [2024-10-01 15:59:01.225387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.788 [2024-10-01 15:59:01.225398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.225404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.225414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.225420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.225426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.225456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.225467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.235593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.235615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.235905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.235922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.789 [2024-10-01 15:59:01.235929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.236064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.236073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.789 [2024-10-01 15:59:01.236080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.236223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.236236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.236373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.236383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.236390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.236399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.236405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.236411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.236441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.236448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.246647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.246667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.246877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.246890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.789 [2024-10-01 15:59:01.246898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.246983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.246992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.789 [2024-10-01 15:59:01.246999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.247011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.247020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.247030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.247036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.247045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.247055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.247061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.247067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.247081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.247087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.258840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.258861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.259271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.259288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.789 [2024-10-01 15:59:01.259296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.259429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.259439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.789 [2024-10-01 15:59:01.259445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.260036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.260052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.260333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.260344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.260351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.260361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.260366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.260373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.789 [2024-10-01 15:59:01.260525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.260534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.789 [2024-10-01 15:59:01.269041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.269062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.789 [2024-10-01 15:59:01.269231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.269243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.789 [2024-10-01 15:59:01.269250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.269443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.789 [2024-10-01 15:59:01.269452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.789 [2024-10-01 15:59:01.269463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.789 [2024-10-01 15:59:01.269714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.269727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.789 [2024-10-01 15:59:01.270402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.789 [2024-10-01 15:59:01.270415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.789 [2024-10-01 15:59:01.270422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.270431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.270437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.270444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.270962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.270979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.279679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.279700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.279849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.279867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.790 [2024-10-01 15:59:01.279875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.279946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.279955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.790 [2024-10-01 15:59:01.279962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.280082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.280095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.280186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.280196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.280202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.280212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.280217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.280224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.280251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.280258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.290582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.290607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.290914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.290930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.790 [2024-10-01 15:59:01.290938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.291081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.291091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.790 [2024-10-01 15:59:01.291098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.291242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.291254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.291392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.291402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.291408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.291418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.291424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.291430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.291456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.291463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.302146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.302167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.302380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.302393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.790 [2024-10-01 15:59:01.302400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.302490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.302499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.790 [2024-10-01 15:59:01.302506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.302636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.302647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.302785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.302795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.302801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.302814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.302820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.302826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.302856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.302870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.790 [2024-10-01 15:59:01.314388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.314409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.790 [2024-10-01 15:59:01.314783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.314799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.790 [2024-10-01 15:59:01.314806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.314913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.790 [2024-10-01 15:59:01.314924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.790 [2024-10-01 15:59:01.314931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.790 [2024-10-01 15:59:01.315091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.315104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.790 [2024-10-01 15:59:01.315130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.315138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.315144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.315153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.790 [2024-10-01 15:59:01.315159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.790 [2024-10-01 15:59:01.315165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.790 [2024-10-01 15:59:01.315180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.315186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.324695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.791 [2024-10-01 15:59:01.324717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.791 [2024-10-01 15:59:01.324902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.791 [2024-10-01 15:59:01.324916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.791 [2024-10-01 15:59:01.324924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.791 [2024-10-01 15:59:01.325070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.791 [2024-10-01 15:59:01.325080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.791 [2024-10-01 15:59:01.325087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.791 [2024-10-01 15:59:01.325102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.791 [2024-10-01 15:59:01.325112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.791 [2024-10-01 15:59:01.325122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.791 [2024-10-01 15:59:01.325128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.791 [2024-10-01 15:59:01.325134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.791 [2024-10-01 15:59:01.325142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.791 [2024-10-01 15:59:01.325148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.791 [2024-10-01 15:59:01.325154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.791 [2024-10-01 15:59:01.325167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.325174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.336714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.791 [2024-10-01 15:59:01.336736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.791 [2024-10-01 15:59:01.336949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.791 [2024-10-01 15:59:01.336962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.791 [2024-10-01 15:59:01.336970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.791 [2024-10-01 15:59:01.337114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.791 [2024-10-01 15:59:01.337124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.791 [2024-10-01 15:59:01.337130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.791 [2024-10-01 15:59:01.337927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.791 [2024-10-01 15:59:01.337942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.791 [2024-10-01 15:59:01.338418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.791 [2024-10-01 15:59:01.338430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.791 [2024-10-01 15:59:01.338436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.791 [2024-10-01 15:59:01.338446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.791 [2024-10-01 15:59:01.338452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.791 [2024-10-01 15:59:01.338458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.791 [2024-10-01 15:59:01.338757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.338768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.791 [2024-10-01 15:59:01.348621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.348643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.348949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.791 [2024-10-01 15:59:01.348966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.791 [2024-10-01 15:59:01.348974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.791 [2024-10-01 15:59:01.349166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.791 [2024-10-01 15:59:01.349177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.791 [2024-10-01 15:59:01.349184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.791 [2024-10-01 15:59:01.349327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.791 [2024-10-01 15:59:01.349340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.791 [2024-10-01 15:59:01.349477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.791 [2024-10-01 15:59:01.349487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.791 [2024-10-01 15:59:01.349494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.791 [2024-10-01 15:59:01.349504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.791 [2024-10-01 15:59:01.349510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.791 [2024-10-01 15:59:01.349516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.791 [2024-10-01 15:59:01.349545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.791 [2024-10-01 15:59:01.349553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.791 [2024-10-01 15:59:01.360354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.360375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.360521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.791 [2024-10-01 15:59:01.360533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.791 [2024-10-01 15:59:01.360540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.791 [2024-10-01 15:59:01.360667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.791 [2024-10-01 15:59:01.360677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.791 [2024-10-01 15:59:01.360684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.791 [2024-10-01 15:59:01.360696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.791 [2024-10-01 15:59:01.360705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.791 [2024-10-01 15:59:01.360714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.791 [2024-10-01 15:59:01.360720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.791 [2024-10-01 15:59:01.360727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.791 [2024-10-01 15:59:01.360735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.791 [2024-10-01 15:59:01.360747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.791 [2024-10-01 15:59:01.360753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.791 [2024-10-01 15:59:01.360766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.791 [2024-10-01 15:59:01.360773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.791 [2024-10-01 15:59:01.372358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.372381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.791 [2024-10-01 15:59:01.372499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.791 [2024-10-01 15:59:01.372511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.791 [2024-10-01 15:59:01.372518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.791 [2024-10-01 15:59:01.372671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.372681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.792 [2024-10-01 15:59:01.372688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.372700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.372709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.372719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.372725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.372731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.792 [2024-10-01 15:59:01.372740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.372746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.372752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.373545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.373560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.384640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.384662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.384937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.384951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.792 [2024-10-01 15:59:01.384959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.385052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.385061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.792 [2024-10-01 15:59:01.385068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.385455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.385473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.385519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.385527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.385533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.385542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.385548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.385554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.385568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.385574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.792 [2024-10-01 15:59:01.394722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.394752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.394853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.394871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.792 [2024-10-01 15:59:01.394878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.394961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.394971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.792 [2024-10-01 15:59:01.394978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.394986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.394997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.395005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.395011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.395017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.792 [2024-10-01 15:59:01.395030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.395037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.395042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.395048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.395060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.406976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.406996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.407219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.407237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.792 [2024-10-01 15:59:01.407245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.407454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.407465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.792 [2024-10-01 15:59:01.407472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.407614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.407627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.407652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.407660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.407666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.407675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.792 [2024-10-01 15:59:01.407680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.792 [2024-10-01 15:59:01.407686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.792 [2024-10-01 15:59:01.407700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.792 [2024-10-01 15:59:01.407707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.792 [2024-10-01 15:59:01.419029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.419050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.792 [2024-10-01 15:59:01.419348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.419365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.792 [2024-10-01 15:59:01.419372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.419537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.792 [2024-10-01 15:59:01.419547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.792 [2024-10-01 15:59:01.419554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.792 [2024-10-01 15:59:01.419730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.419744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.792 [2024-10-01 15:59:01.419770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.419778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.419784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.793 [2024-10-01 15:59:01.419793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.419799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.419809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.419823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.419830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.429299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.429319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.429478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.429490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.793 [2024-10-01 15:59:01.429498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.429642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.429652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.793 [2024-10-01 15:59:01.429659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.429670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.429679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.429689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.429696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.429702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.429711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.429716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.429722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.429735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.429742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.793 [2024-10-01 15:59:01.442305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.442327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.442776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.442794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.793 [2024-10-01 15:59:01.442801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.442995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.443006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.793 [2024-10-01 15:59:01.443013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.443111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.443121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.443944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.443958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.443965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.793 [2024-10-01 15:59:01.443975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.443981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.443987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.444415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.444427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.452387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.453212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.453452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.453467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.793 [2024-10-01 15:59:01.453475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.454075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.454093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.793 [2024-10-01 15:59:01.454100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.454110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.454381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.454393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.454399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.454406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.454447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.793 [2024-10-01 15:59:01.454454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.454460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.454466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.454478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.793 [2024-10-01 15:59:01.463708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.464799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.464819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.793 [2024-10-01 15:59:01.464827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.465255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.793 [2024-10-01 15:59:01.465275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.793 [2024-10-01 15:59:01.465575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.793 [2024-10-01 15:59:01.465590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.793 [2024-10-01 15:59:01.465597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.793 [2024-10-01 15:59:01.465605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.793 [2024-10-01 15:59:01.465610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.793 [2024-10-01 15:59:01.465617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.793 [2024-10-01 15:59:01.465760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.793 [2024-10-01 15:59:01.465772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.465797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.465805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.465811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.465823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.794 [2024-10-01 15:59:01.475886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.475907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.476119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.476131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.794 [2024-10-01 15:59:01.476139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.476331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.476342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.794 [2024-10-01 15:59:01.476349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.476361] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.476370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.476388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.476395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.476402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.476410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.476417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.476423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.476440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.794 [2024-10-01 15:59:01.476446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.794 [2024-10-01 15:59:01.488752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.488774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.489245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.489262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.794 [2024-10-01 15:59:01.489270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.489463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.489474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.794 [2024-10-01 15:59:01.489481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.489719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.489734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.489870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.489881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.489888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.794 [2024-10-01 15:59:01.489897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.489904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.489910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.489939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.794 [2024-10-01 15:59:01.489947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.794 [2024-10-01 15:59:01.499552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.499573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.499812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.499825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.794 [2024-10-01 15:59:01.499833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.500025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.500037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.794 [2024-10-01 15:59:01.500044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.500490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.500504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.500673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.500687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.500694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.500703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.500709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.794 [2024-10-01 15:59:01.500715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.794 [2024-10-01 15:59:01.500895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.794 [2024-10-01 15:59:01.500905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.794 [2024-10-01 15:59:01.511584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.511606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.794 [2024-10-01 15:59:01.511955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.511972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.794 [2024-10-01 15:59:01.511980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.512108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.794 [2024-10-01 15:59:01.512117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.794 [2024-10-01 15:59:01.512124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.794 [2024-10-01 15:59:01.512310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.512323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.794 [2024-10-01 15:59:01.512471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.794 [2024-10-01 15:59:01.512481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.512487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.795 [2024-10-01 15:59:01.512497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.512503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.512509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.512539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.512547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.523133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.523155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.523558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.523574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.795 [2024-10-01 15:59:01.523582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.523803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.523814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.795 [2024-10-01 15:59:01.523821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.524081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.524094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.524242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.524251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.524258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.524267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.524273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.524280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.524309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.524317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.795 [2024-10-01 15:59:01.534622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.534643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.535031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.535048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.795 [2024-10-01 15:59:01.535056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.535299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.535309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.795 [2024-10-01 15:59:01.535316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.535579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.535593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.535741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.535751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.535758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.795 [2024-10-01 15:59:01.535768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.535774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.535780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.535810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.535821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.545925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.545946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.546114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.546126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.795 [2024-10-01 15:59:01.546134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.546275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.546284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.795 [2024-10-01 15:59:01.546292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.546303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.546313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.546322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.546328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.546335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.546343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.546349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.546355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.795 [2024-10-01 15:59:01.546369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.795 [2024-10-01 15:59:01.546375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.795 [2024-10-01 15:59:01.557666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.557688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.795 [2024-10-01 15:59:01.557788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.557801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.795 [2024-10-01 15:59:01.557808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.558001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.795 [2024-10-01 15:59:01.558012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.795 [2024-10-01 15:59:01.558019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.795 [2024-10-01 15:59:01.558031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.558040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.795 [2024-10-01 15:59:01.558050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.795 [2024-10-01 15:59:01.558056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.795 [2024-10-01 15:59:01.558066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.796 [2024-10-01 15:59:01.558075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.558080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.558086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.558100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.558106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.568795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.568817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.569288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.569306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.796 [2024-10-01 15:59:01.569313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.569460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.569470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.796 [2024-10-01 15:59:01.569476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.569734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.569747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.569784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.569791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.569798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.569807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.569813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.569820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.569953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.569963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.796 [2024-10-01 15:59:01.579754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.579775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.579972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.579987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.796 [2024-10-01 15:59:01.579994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.580152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.580165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.796 [2024-10-01 15:59:01.580173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.580304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.580315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.580454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.580465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.580472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.796 [2024-10-01 15:59:01.580481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.580487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.580494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.580523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.580531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.591609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.591629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.591784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.591797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.796 [2024-10-01 15:59:01.591804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.591898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.591908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.796 [2024-10-01 15:59:01.591915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.591927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.591935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.591945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.591951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.591958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.591966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.591972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.591978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.796 [2024-10-01 15:59:01.591991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.796 [2024-10-01 15:59:01.591998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.796 [2024-10-01 15:59:01.603091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.603116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.796 [2024-10-01 15:59:01.603460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.603477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.796 [2024-10-01 15:59:01.603484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.603681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.796 [2024-10-01 15:59:01.603692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.796 [2024-10-01 15:59:01.603699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.796 [2024-10-01 15:59:01.603903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.603918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.796 [2024-10-01 15:59:01.603944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.796 [2024-10-01 15:59:01.603952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.796 [2024-10-01 15:59:01.603958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.797 [2024-10-01 15:59:01.603967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.603973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.603979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.604107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.604116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.614910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.614931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.615236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.615252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.797 [2024-10-01 15:59:01.615260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.615337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.615347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.797 [2024-10-01 15:59:01.615354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.615529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.615541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.615682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.615692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.615698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.615711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.615717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.615723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.615754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.615761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.797 [2024-10-01 15:59:01.625494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.625515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.625725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.625738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.797 [2024-10-01 15:59:01.625745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.625962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.625974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.797 [2024-10-01 15:59:01.625981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.625993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.626002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.626021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.626028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.626034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.797 [2024-10-01 15:59:01.626043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.626049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.626055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.626068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.626075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.636687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.636709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.636869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.636882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.797 [2024-10-01 15:59:01.636890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.637083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.637092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.797 [2024-10-01 15:59:01.637103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.637114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.637123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.637133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.637139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.637145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.637154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.637160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.637166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.637179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.637186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.797 [2024-10-01 15:59:01.647114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.647134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.797 [2024-10-01 15:59:01.647293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.647305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.797 [2024-10-01 15:59:01.647313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.647483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.797 [2024-10-01 15:59:01.647493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.797 [2024-10-01 15:59:01.647500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.797 [2024-10-01 15:59:01.647511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.647521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.797 [2024-10-01 15:59:01.647530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.647536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.647543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.797 [2024-10-01 15:59:01.647551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.797 [2024-10-01 15:59:01.647557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.797 [2024-10-01 15:59:01.647563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.797 [2024-10-01 15:59:01.647576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.647583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.797 [2024-10-01 15:59:01.658307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.658329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.658501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.658515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.798 [2024-10-01 15:59:01.658523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.658720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.658730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.798 [2024-10-01 15:59:01.658738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.658750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.658759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.658768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.658774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.658780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.658791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.658797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.658804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.658817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.658824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.798 [2024-10-01 15:59:01.668729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.668749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.669135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.669152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.798 [2024-10-01 15:59:01.669160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.669299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.669309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.798 [2024-10-01 15:59:01.669315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.669470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.669482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.669826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.669837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.669844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.798 [2024-10-01 15:59:01.669853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.669868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.669874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.670030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.670040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.681021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.681043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.681371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.681388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.798 [2024-10-01 15:59:01.681395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.681476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.681485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.798 [2024-10-01 15:59:01.681492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.681635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.681647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.681784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.681795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.681803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.681813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.681820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.681827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.681857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.681872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.798 [2024-10-01 15:59:01.691609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.691630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.691795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.691808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.798 [2024-10-01 15:59:01.691816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.691963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.691973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.798 [2024-10-01 15:59:01.691980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.691994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.692003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.692013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.692019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.692025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.798 [2024-10-01 15:59:01.692033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.692039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.692045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.692057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.692064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.798 [2024-10-01 15:59:01.702853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.702883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.798 [2024-10-01 15:59:01.703043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.703055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.798 [2024-10-01 15:59:01.703063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.703259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.798 [2024-10-01 15:59:01.703268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.798 [2024-10-01 15:59:01.703275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.798 [2024-10-01 15:59:01.703286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.703296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.798 [2024-10-01 15:59:01.703306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.798 [2024-10-01 15:59:01.703312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.798 [2024-10-01 15:59:01.703319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.798 [2024-10-01 15:59:01.703327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.703333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.703339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.703353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.703360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.799 [2024-10-01 15:59:01.714307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.714330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.714617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.714637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.799 [2024-10-01 15:59:01.714645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.714820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.714830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.799 [2024-10-01 15:59:01.714837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.714871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.714882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.714891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.714897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.714904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.799 [2024-10-01 15:59:01.714913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.714918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.714924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.715108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.715118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.725181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.725202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.725642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.725659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.799 [2024-10-01 15:59:01.725666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.725751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.725761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.799 [2024-10-01 15:59:01.725768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.725931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.725944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.726083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.726092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.726099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.726108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.726114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.726124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.726154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.726162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.799 [2024-10-01 15:59:01.736105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.736126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.736244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.736257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.799 [2024-10-01 15:59:01.736264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.736414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.736423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.799 [2024-10-01 15:59:01.736430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.736837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.736851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.737114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.737124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.737130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.799 [2024-10-01 15:59:01.737140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.737146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.737152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.737541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.737553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.799 [2024-10-01 15:59:01.748144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.748166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.799 [2024-10-01 15:59:01.748429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.748445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.799 [2024-10-01 15:59:01.748453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.748541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.799 [2024-10-01 15:59:01.748551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.799 [2024-10-01 15:59:01.748557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.799 [2024-10-01 15:59:01.748760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.748776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.799 [2024-10-01 15:59:01.748926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.748937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.799 [2024-10-01 15:59:01.748943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.799 [2024-10-01 15:59:01.748952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.799 [2024-10-01 15:59:01.748958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.800 [2024-10-01 15:59:01.748964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.800 [2024-10-01 15:59:01.748995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.800 [2024-10-01 15:59:01.749003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.800 [2024-10-01 15:59:01.759246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.800 [2024-10-01 15:59:01.759267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.800 [2024-10-01 15:59:01.759398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.800 [2024-10-01 15:59:01.759411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.800 [2024-10-01 15:59:01.759418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.800 [2024-10-01 15:59:01.759560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.800 [2024-10-01 15:59:01.759570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.800 [2024-10-01 15:59:01.759577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.800 [2024-10-01 15:59:01.759706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.800 [2024-10-01 15:59:01.759717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.800 [2024-10-01 15:59:01.759855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.800 [2024-10-01 15:59:01.759871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.800 [2024-10-01 15:59:01.759878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.800 [2024-10-01 15:59:01.759887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.800 [2024-10-01 15:59:01.759893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.800 [2024-10-01 15:59:01.759899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.800 [2024-10-01 15:59:01.759929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.800 [2024-10-01 15:59:01.759937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.800 [2024-10-01 15:59:01.770161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.800 [2024-10-01 15:59:01.770183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.800 [2024-10-01 15:59:01.770438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.800 [2024-10-01 15:59:01.770454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.800 [2024-10-01 15:59:01.770464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.800 [2024-10-01 15:59:01.770598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.800 [2024-10-01 15:59:01.770608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.800 [2024-10-01 15:59:01.770615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.800 [2024-10-01 15:59:01.770757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.800 [2024-10-01 15:59:01.770769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.800 [2024-10-01 15:59:01.770915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.800 [2024-10-01 15:59:01.770924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.800 [2024-10-01 15:59:01.770931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.800 [2024-10-01 15:59:01.770940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.800 [2024-10-01 15:59:01.770945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.800 [2024-10-01 15:59:01.770951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.800 [2024-10-01 15:59:01.770981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.800 [2024-10-01 15:59:01.770989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.800 [2024-10-01 15:59:01.781651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.781673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.781789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.800 [2024-10-01 15:59:01.781801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.800 [2024-10-01 15:59:01.781808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.800 [2024-10-01 15:59:01.782024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.800 [2024-10-01 15:59:01.782036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.800 [2024-10-01 15:59:01.782043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.800 [2024-10-01 15:59:01.782055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.800 [2024-10-01 15:59:01.782065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.800 [2024-10-01 15:59:01.782074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.800 [2024-10-01 15:59:01.782081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.800 [2024-10-01 15:59:01.782087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.800 [2024-10-01 15:59:01.782096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.800 [2024-10-01 15:59:01.782102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.800 [2024-10-01 15:59:01.782108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.800 [2024-10-01 15:59:01.782125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.800 [2024-10-01 15:59:01.782132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.800 [2024-10-01 15:59:01.794311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.794333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.794599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.800 [2024-10-01 15:59:01.794615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.800 [2024-10-01 15:59:01.794623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.800 [2024-10-01 15:59:01.794770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.800 [2024-10-01 15:59:01.794780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.800 [2024-10-01 15:59:01.794787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.800 [2024-10-01 15:59:01.794936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.800 [2024-10-01 15:59:01.794949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.800 [2024-10-01 15:59:01.795096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.800 [2024-10-01 15:59:01.795106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.800 [2024-10-01 15:59:01.795112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.800 [2024-10-01 15:59:01.795121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.800 [2024-10-01 15:59:01.795128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.800 [2024-10-01 15:59:01.795134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.800 [2024-10-01 15:59:01.795163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.800 [2024-10-01 15:59:01.795170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.800 [2024-10-01 15:59:01.806470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.806491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.800 [2024-10-01 15:59:01.806604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.800 [2024-10-01 15:59:01.806616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.800 [2024-10-01 15:59:01.806623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.806766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.806776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.801 [2024-10-01 15:59:01.806782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.806794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.806803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.806816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.806822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.806829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.806837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.806842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.806848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.806868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.806874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.818295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.818317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.818781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.818800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.801 [2024-10-01 15:59:01.818809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.818954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.818965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.801 [2024-10-01 15:59:01.818972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.819327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.819342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.819495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.819506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.819512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.819521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.819528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.819534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.819689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.819699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.829823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.829845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.830100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.830117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.801 [2024-10-01 15:59:01.830125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.830326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.830337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.801 [2024-10-01 15:59:01.830344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.830578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.830592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.830737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.830747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.830754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.830763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.830769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.830775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.830806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.830814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.841029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.841051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.841319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.841336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.801 [2024-10-01 15:59:01.841343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.841518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.841529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.801 [2024-10-01 15:59:01.841536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.841681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.841693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.841719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.841727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.841733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.841742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.801 [2024-10-01 15:59:01.841748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.801 [2024-10-01 15:59:01.841755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.801 [2024-10-01 15:59:01.841768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.841778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.801 [2024-10-01 15:59:01.851955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.851977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.801 [2024-10-01 15:59:01.852087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.852100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.801 [2024-10-01 15:59:01.852107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.852246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.801 [2024-10-01 15:59:01.852256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.801 [2024-10-01 15:59:01.852263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.801 [2024-10-01 15:59:01.852274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.801 [2024-10-01 15:59:01.852284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.852294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.852301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.852308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.852317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.852322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.852329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.852342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.852349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.863665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.863687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.863936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.863952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.802 [2024-10-01 15:59:01.863960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.864029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.864039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.802 [2024-10-01 15:59:01.864046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.864199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.864212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.864238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.864245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.864256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.864266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.864272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.864278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.864292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.864299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.874145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.874165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.874269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.874282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.802 [2024-10-01 15:59:01.874289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.874431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.874441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.802 [2024-10-01 15:59:01.874447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.874459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.874468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.874478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.874484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.874491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.874499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.874505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.874511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.874525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.874532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.886339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.886362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.886526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.886539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.802 [2024-10-01 15:59:01.886546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.886635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.886648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.802 [2024-10-01 15:59:01.886655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.886667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.886676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.886694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.886701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.886707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.886715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.886721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.886727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.886740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.886747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.802 [2024-10-01 15:59:01.898195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.898216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.802 [2024-10-01 15:59:01.898587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.898603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.802 [2024-10-01 15:59:01.898611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.898814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.802 [2024-10-01 15:59:01.898825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.802 [2024-10-01 15:59:01.898832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.802 [2024-10-01 15:59:01.899035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.899050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.802 [2024-10-01 15:59:01.899077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.899084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.899091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.899100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.802 [2024-10-01 15:59:01.899105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.802 [2024-10-01 15:59:01.899111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.802 [2024-10-01 15:59:01.899124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.803 [2024-10-01 15:59:01.899131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.803 [2024-10-01 15:59:01.908627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.803 [2024-10-01 15:59:01.908648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.803 [2024-10-01 15:59:01.908820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.803 [2024-10-01 15:59:01.908833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.803 [2024-10-01 15:59:01.908840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.803 [2024-10-01 15:59:01.908945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.803 [2024-10-01 15:59:01.908955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.803 [2024-10-01 15:59:01.908962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.803 [2024-10-01 15:59:01.908973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.803 [2024-10-01 15:59:01.908983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.803 [2024-10-01 15:59:01.908993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.803 [2024-10-01 15:59:01.908999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.803 [2024-10-01 15:59:01.909005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.803 [2024-10-01 15:59:01.909014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.803 [2024-10-01 15:59:01.909020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.803 [2024-10-01 15:59:01.909026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.803 [2024-10-01 15:59:01.909040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.803 [2024-10-01 15:59:01.909046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.803 [2024-10-01 15:59:01.921158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.803 [2024-10-01 15:59:01.921179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.803 [2024-10-01 15:59:01.921278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.803 [2024-10-01 15:59:01.921290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.803 [2024-10-01 15:59:01.921298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.803 [2024-10-01 15:59:01.921368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.803 [2024-10-01 15:59:01.921377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.803 [2024-10-01 15:59:01.921383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.803 [2024-10-01 15:59:01.921660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.803 [2024-10-01 15:59:01.921673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.803 [2024-10-01 15:59:01.921821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.803 [2024-10-01 15:59:01.921830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.803 [2024-10-01 15:59:01.921840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.803 [2024-10-01 15:59:01.921849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.921855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.921861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.803 [2024-10-01 15:59:01.921899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.803 [2024-10-01 15:59:01.921906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.803 [2024-10-01 15:59:01.932644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.932666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.933015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.933032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.803 [2024-10-01 15:59:01.933040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.933186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.933196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.803 [2024-10-01 15:59:01.933202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.933350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.933366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.933515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.933526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.933533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.803 [2024-10-01 15:59:01.933542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.933548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.933554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.803 [2024-10-01 15:59:01.933696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.803 [2024-10-01 15:59:01.933706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.803 [2024-10-01 15:59:01.943914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.943935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.944147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.944159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.803 [2024-10-01 15:59:01.944166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.944256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.944265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.803 [2024-10-01 15:59:01.944283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.944294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.944303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.944313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.944319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.944325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.803 [2024-10-01 15:59:01.944334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.944339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.944346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.803 [2024-10-01 15:59:01.944359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.803 [2024-10-01 15:59:01.944366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.803 [2024-10-01 15:59:01.955402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.955423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.803 [2024-10-01 15:59:01.955716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.955732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.803 [2024-10-01 15:59:01.955739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.955909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.803 [2024-10-01 15:59:01.955919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.803 [2024-10-01 15:59:01.955926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.803 [2024-10-01 15:59:01.956127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.956141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.803 [2024-10-01 15:59:01.956167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.803 [2024-10-01 15:59:01.956175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.803 [2024-10-01 15:59:01.956181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.956190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.956196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.956201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.956215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 [2024-10-01 15:59:01.956222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.804 [2024-10-01 15:59:01.966635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.966656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.966762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.966775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.804 [2024-10-01 15:59:01.966782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.966930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.966940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.804 [2024-10-01 15:59:01.966947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.967284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.967298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.967455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.967466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.967472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.804 [2024-10-01 15:59:01.967481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.967487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.967493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.967665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 [2024-10-01 15:59:01.967674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 [2024-10-01 15:59:01.978408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.978431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.978711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.978727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.804 [2024-10-01 15:59:01.978735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.978888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.978899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.804 [2024-10-01 15:59:01.978906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.979156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.979169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.979317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.979327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.979334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.979343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.979353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.979359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.979389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 [2024-10-01 15:59:01.979397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.804 [2024-10-01 15:59:01.990404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.990425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:01.990525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.990537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.804 [2024-10-01 15:59:01.990545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.990692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:01.990701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.804 [2024-10-01 15:59:01.990708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:01.990719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.990728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:01.990738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.990744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.990750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.804 [2024-10-01 15:59:01.990759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:01.990765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:01.990771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:01.990784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 [2024-10-01 15:59:01.990791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.804 11356.90 IOPS, 44.36 MiB/s [2024-10-01 15:59:02.001981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:02.002003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.804 [2024-10-01 15:59:02.002247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:02.002269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.804 [2024-10-01 15:59:02.002276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:02.002417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.804 [2024-10-01 15:59:02.002426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.804 [2024-10-01 15:59:02.002433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.804 [2024-10-01 15:59:02.002788] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:02.002803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.804 [2024-10-01 15:59:02.002961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.804 [2024-10-01 15:59:02.002972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.804 [2024-10-01 15:59:02.002978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.804 [2024-10-01 15:59:02.002988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.002994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.003000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.003141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.003151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.805 [2024-10-01 15:59:02.012199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.012219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.012384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.012396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.805 [2024-10-01 15:59:02.012404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.012551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.012560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.805 [2024-10-01 15:59:02.012567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.012578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.012587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.012597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.012603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.012610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.805 [2024-10-01 15:59:02.012618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.012623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.012629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.012642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.012649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.024559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.024580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.024695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.024707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.805 [2024-10-01 15:59:02.024715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.024800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.024809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.805 [2024-10-01 15:59:02.024816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.024828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.024837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.024846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.024853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.024859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.024873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.024879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.024885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.024898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.024905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.805 [2024-10-01 15:59:02.035218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.035239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.035360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.035372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.805 [2024-10-01 15:59:02.035379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.035535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.035545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.805 [2024-10-01 15:59:02.035551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.035563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.035573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.035583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.035589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.035595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.805 [2024-10-01 15:59:02.035603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.035609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.035619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.035633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.035639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.047631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.047653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.047923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.047940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.805 [2024-10-01 15:59:02.047948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.048091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.048100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.805 [2024-10-01 15:59:02.048107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.048263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.048276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.805 [2024-10-01 15:59:02.048414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.048425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.048431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.048440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.805 [2024-10-01 15:59:02.048446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.805 [2024-10-01 15:59:02.048452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.805 [2024-10-01 15:59:02.048482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.805 [2024-10-01 15:59:02.048489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.805 [2024-10-01 15:59:02.058972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.058994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.805 [2024-10-01 15:59:02.059331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.805 [2024-10-01 15:59:02.059348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.805 [2024-10-01 15:59:02.059356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.805 [2024-10-01 15:59:02.059506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.059516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.806 [2024-10-01 15:59:02.059523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.059776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.059793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.059948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.059959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.059966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.806 [2024-10-01 15:59:02.059975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.059981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.059987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.060016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.060024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.070339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.070361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.070588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.070603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.806 [2024-10-01 15:59:02.070611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.070737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.070747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.806 [2024-10-01 15:59:02.070754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.070934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.070947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.070986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.070994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.071000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.071010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.071016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.071022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.071037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.071043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.806 [2024-10-01 15:59:02.081294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.081315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.081432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.081444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.806 [2024-10-01 15:59:02.081455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.081602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.081612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.806 [2024-10-01 15:59:02.081619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.081631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.081639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.081649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.081655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.081662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.806 [2024-10-01 15:59:02.081670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.081676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.081682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.082136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.082147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.093592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.093613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.093960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.093977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.806 [2024-10-01 15:59:02.093985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.094124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.094133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.806 [2024-10-01 15:59:02.094140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.094288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.094300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.094437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.094447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.094454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.094463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.094469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.094479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.094509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.094516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.806 [2024-10-01 15:59:02.104328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.104349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.104586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.104600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.806 [2024-10-01 15:59:02.104607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.104820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.806 [2024-10-01 15:59:02.104831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.806 [2024-10-01 15:59:02.104838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.806 [2024-10-01 15:59:02.105288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.105303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.806 [2024-10-01 15:59:02.105500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.105511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.105518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.806 [2024-10-01 15:59:02.105527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.806 [2024-10-01 15:59:02.105533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.806 [2024-10-01 15:59:02.105540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.806 [2024-10-01 15:59:02.105683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.105693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.806 [2024-10-01 15:59:02.115662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.115684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.806 [2024-10-01 15:59:02.115771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.115784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.807 [2024-10-01 15:59:02.115791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.115939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.115949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.807 [2024-10-01 15:59:02.115956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.115968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.115977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.115990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.115996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.116002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.116011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.116017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.116023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.116471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.116481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.807 [2024-10-01 15:59:02.126977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.126997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.127231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.127243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.807 [2024-10-01 15:59:02.127251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.127474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.127485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.807 [2024-10-01 15:59:02.127491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.127942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.127957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.128125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.128135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.128141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.807 [2024-10-01 15:59:02.128150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.128157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.128163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.128336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.128346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.137611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.137632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.137811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.137824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.807 [2024-10-01 15:59:02.137832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.138047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.138058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.807 [2024-10-01 15:59:02.138065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.138076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.138085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.138095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.138101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.138107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.138116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.138122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.138128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.138141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.138148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.807 [2024-10-01 15:59:02.150242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.150264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.150685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.150703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.807 [2024-10-01 15:59:02.150710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.150925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.150937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.807 [2024-10-01 15:59:02.150944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.151681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.151698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.151987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.151998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.152005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.807 [2024-10-01 15:59:02.152014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.152020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.152027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.152069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.152080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.160352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.160373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.160556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.160569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.807 [2024-10-01 15:59:02.160576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.160811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.160822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.807 [2024-10-01 15:59:02.160828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.807 [2024-10-01 15:59:02.161172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.161188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.807 [2024-10-01 15:59:02.161227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.161234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.161241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.161249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.807 [2024-10-01 15:59:02.161256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.807 [2024-10-01 15:59:02.161262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.807 [2024-10-01 15:59:02.161360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.807 [2024-10-01 15:59:02.161369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.807 [2024-10-01 15:59:02.171261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.171281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.807 [2024-10-01 15:59:02.171558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.807 [2024-10-01 15:59:02.171572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.808 [2024-10-01 15:59:02.171579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.171651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.171660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.808 [2024-10-01 15:59:02.171667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.172409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.172427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.172946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.172963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.172970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.808 [2024-10-01 15:59:02.172979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.172985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.172991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.173173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.173183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.182364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.182385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.182617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.182630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.808 [2024-10-01 15:59:02.182638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.182880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.182892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.808 [2024-10-01 15:59:02.182899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.183199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.183213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.183459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.183469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.183476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.183485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.183491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.183497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.183536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.183543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.808 [2024-10-01 15:59:02.192442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.192471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.192721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.192734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.808 [2024-10-01 15:59:02.192742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.192951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.192966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.808 [2024-10-01 15:59:02.192973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.192982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.192993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.193001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.193006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.193012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.808 [2024-10-01 15:59:02.193025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.193032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.193038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.193043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.193055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.203660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.203680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.203858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.203875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.808 [2024-10-01 15:59:02.203882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.204074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.204084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.808 [2024-10-01 15:59:02.204091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.204103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.204112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.204122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.204128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.204135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.204143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.204149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.204155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.204168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.204175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.808 [2024-10-01 15:59:02.215598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.215619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.216031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.216049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.808 [2024-10-01 15:59:02.216056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.216219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.216228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.808 [2024-10-01 15:59:02.216235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.808 [2024-10-01 15:59:02.216274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.216285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.808 [2024-10-01 15:59:02.216295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.216301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.216308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.808 [2024-10-01 15:59:02.216317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.808 [2024-10-01 15:59:02.216322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.808 [2024-10-01 15:59:02.216329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.808 [2024-10-01 15:59:02.216342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.216349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.808 [2024-10-01 15:59:02.226875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.226896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.808 [2024-10-01 15:59:02.227244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.808 [2024-10-01 15:59:02.227260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.809 [2024-10-01 15:59:02.227268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.227410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.227419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.809 [2024-10-01 15:59:02.227426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.227456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.227467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.227477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.227483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.227495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.227504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.227510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.227517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.227530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.227536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.809 [2024-10-01 15:59:02.236956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.236985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.237152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.237164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.809 [2024-10-01 15:59:02.237172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.237390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.237400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.809 [2024-10-01 15:59:02.237407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.237416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.237427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.237435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.237440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.237447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.809 [2024-10-01 15:59:02.237460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.237467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.237472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.237478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.237490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.249207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.249228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.249611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.249627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.809 [2024-10-01 15:59:02.249634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.249727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.249736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.809 [2024-10-01 15:59:02.249746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.250004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.250018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.250054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.250062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.250068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.250078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.250084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.250090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.250219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.250228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.809 [2024-10-01 15:59:02.260277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.260298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.260549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.260563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.809 [2024-10-01 15:59:02.260571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.260779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.260790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.809 [2024-10-01 15:59:02.260796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.260808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.260817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.260827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.260834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.260840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.809 [2024-10-01 15:59:02.260848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.260854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.260861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.260881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.260887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.809 [2024-10-01 15:59:02.271455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.271480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.809 [2024-10-01 15:59:02.271776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.271793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.809 [2024-10-01 15:59:02.271801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.271957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.809 [2024-10-01 15:59:02.271968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.809 [2024-10-01 15:59:02.271975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.809 [2024-10-01 15:59:02.272004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.272014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.809 [2024-10-01 15:59:02.272024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.809 [2024-10-01 15:59:02.272030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.809 [2024-10-01 15:59:02.272037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.809 [2024-10-01 15:59:02.272047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.272053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.272059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.272073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.272079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.810 [2024-10-01 15:59:02.281915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.281935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.282089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.282102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.810 [2024-10-01 15:59:02.282109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.282326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.282335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.810 [2024-10-01 15:59:02.282342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.282354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.282363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.282372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.282379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.282385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.810 [2024-10-01 15:59:02.282396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.282402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.282408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.282421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.282428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.294334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.294355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.294759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.294775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.810 [2024-10-01 15:59:02.294783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.294867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.294877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.810 [2024-10-01 15:59:02.294884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.295032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.295044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.295183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.295192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.295199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.295208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.295215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.295221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.295250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.295258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.810 [2024-10-01 15:59:02.304440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.304461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.304625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.304638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.810 [2024-10-01 15:59:02.304646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.304894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.304905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.810 [2024-10-01 15:59:02.304912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.305402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.305416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.305882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.305893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.305900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.810 [2024-10-01 15:59:02.305909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.305915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.305922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.306300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.306310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.316511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.316532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.316884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.316901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.810 [2024-10-01 15:59:02.316908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.317103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.317114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.810 [2024-10-01 15:59:02.317121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.317413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.317428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.317467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.317474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.317481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.317489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.317495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.317501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.317630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.810 [2024-10-01 15:59:02.317639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.810 [2024-10-01 15:59:02.327992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.328013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.810 [2024-10-01 15:59:02.328342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.328358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.810 [2024-10-01 15:59:02.328366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.328512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.810 [2024-10-01 15:59:02.328522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.810 [2024-10-01 15:59:02.328529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.810 [2024-10-01 15:59:02.328675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.328687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.810 [2024-10-01 15:59:02.328825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.328836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.328843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.810 [2024-10-01 15:59:02.328852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.810 [2024-10-01 15:59:02.328858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.810 [2024-10-01 15:59:02.328870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.810 [2024-10-01 15:59:02.328901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.328908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.339470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.339492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.339788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.339804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.339811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.340078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.340090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.340097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.340284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.340299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.340440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.340450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.340457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.340466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.340476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.340482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.340625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.340635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.811 [2024-10-01 15:59:02.350980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.351002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.351379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.351395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.351402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.351595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.351606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.351612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.351900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.351914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.352066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.352076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.352083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.811 [2024-10-01 15:59:02.352092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.352099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.352105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.352135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.352143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.362418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.362440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.362842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.362859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.362871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.363069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.363079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.363086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.363350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.363369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.363518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.363528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.363535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.363544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.363551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.363557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.363586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.363593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.811 [2024-10-01 15:59:02.373914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.373935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.374285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.374301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.374309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.374503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.374513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.374520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.374694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.374708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.374850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.374860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.374874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.811 [2024-10-01 15:59:02.374884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.374890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.374896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.375039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.375049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.385426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.385448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.385787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.385804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.385816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.386036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.386049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.386057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.386320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.386335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.386484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.386495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.386503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.386512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.386518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.386525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.811 [2024-10-01 15:59:02.386555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.811 [2024-10-01 15:59:02.386563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.811 [2024-10-01 15:59:02.396892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.396917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.811 [2024-10-01 15:59:02.397218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.397234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.811 [2024-10-01 15:59:02.397242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.397386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.811 [2024-10-01 15:59:02.397396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.811 [2024-10-01 15:59:02.397402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.811 [2024-10-01 15:59:02.397577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.397591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.811 [2024-10-01 15:59:02.397731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.397741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.397748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.811 [2024-10-01 15:59:02.397757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.811 [2024-10-01 15:59:02.397763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.811 [2024-10-01 15:59:02.397773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.397804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.397812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.408362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.408385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.408731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.408748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.408755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.408878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.408888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.408895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.409078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.409092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.409233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.409243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.409250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.409259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.409266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.409272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.409414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.409423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.812 [2024-10-01 15:59:02.419814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.419835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.420154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.420171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.420179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.420343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.420354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.420360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.420643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.420657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.420812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.420822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.420829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.812 [2024-10-01 15:59:02.420838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.420844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.420850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.420887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.420895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.431086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.431107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.431344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.431357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.431365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.431584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.431595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.431602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.431841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.431854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.432008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.432019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.432026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.432035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.432041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.432047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.432075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.432083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.812 [2024-10-01 15:59:02.442690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.442711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.442874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.442888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.442895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.443044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.443054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.443061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.443072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.443081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.443091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.443097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.443104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.812 [2024-10-01 15:59:02.443112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.443118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.443124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.443138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.443144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.454522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.454543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.454776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.454788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.454796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.455011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.455022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.455029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.455041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.455051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.812 [2024-10-01 15:59:02.455061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.455067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.455073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.455082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.812 [2024-10-01 15:59:02.455087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.812 [2024-10-01 15:59:02.455094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.812 [2024-10-01 15:59:02.455109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.812 [2024-10-01 15:59:02.455119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.812 [2024-10-01 15:59:02.466214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.466236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.812 [2024-10-01 15:59:02.466488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.466502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.812 [2024-10-01 15:59:02.466509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.466716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.812 [2024-10-01 15:59:02.466727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.812 [2024-10-01 15:59:02.466734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.812 [2024-10-01 15:59:02.466746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.466755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.466765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.466771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.466777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.813 [2024-10-01 15:59:02.466785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.466792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.466798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.466811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.466818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.477826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.477847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.478088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.478107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.813 [2024-10-01 15:59:02.478114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.478236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.478246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.813 [2024-10-01 15:59:02.478252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.478264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.478273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.478283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.478293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.478299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.478308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.478314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.478320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.478333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.478339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.813 [2024-10-01 15:59:02.489619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.489640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.489787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.489800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.813 [2024-10-01 15:59:02.489807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.490027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.490037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.813 [2024-10-01 15:59:02.490044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.490055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.490065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.490083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.490090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.490097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.813 [2024-10-01 15:59:02.490105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.490111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.490117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.490131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.490137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.502622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.502644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.502878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.502891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.813 [2024-10-01 15:59:02.502898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.503092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.503106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.813 [2024-10-01 15:59:02.503113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.503125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.503134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.503152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.503159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.503165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.503174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.503180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.503186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.813 [2024-10-01 15:59:02.503199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.813 [2024-10-01 15:59:02.503206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.813 [2024-10-01 15:59:02.514492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.514514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.813 [2024-10-01 15:59:02.514834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.514850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.813 [2024-10-01 15:59:02.514857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.515029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.813 [2024-10-01 15:59:02.515039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.813 [2024-10-01 15:59:02.515046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.813 [2024-10-01 15:59:02.515190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.515203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.813 [2024-10-01 15:59:02.515341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.515351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.515358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.813 [2024-10-01 15:59:02.515367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.813 [2024-10-01 15:59:02.515373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.813 [2024-10-01 15:59:02.515379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.515523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.515532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.526031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.526051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.526217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.526230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.526237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.526371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.526381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.526387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.526399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.526408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.526418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.526424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.526431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.526440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.526445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.526451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.526464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.526471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.814 [2024-10-01 15:59:02.538245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.538266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.538426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.538439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.538446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.538665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.538675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.538682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.538694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.538703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.538712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.538718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.538729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.814 [2024-10-01 15:59:02.538737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.538743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.538750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.538763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.538770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.550222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.550243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.550629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.550645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.550653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.550810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.550820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.550827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.551037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.551050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.551195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.551204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.551210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.551219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.551225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.551232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.551263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.551270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.814 [2024-10-01 15:59:02.561733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.561754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.561998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.562011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.562018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.562231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.562241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.562251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.562263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.562272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.562281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.562287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.562293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.814 [2024-10-01 15:59:02.562302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.562308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.562313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.562327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.562334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.573260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.573281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.573519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.573531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.573539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.573776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.573787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.573793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.574421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.574437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.574580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.574589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.574596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.574605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.574612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.574618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.575549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.575564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.814 [2024-10-01 15:59:02.583762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.583786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.814 [2024-10-01 15:59:02.584017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.584031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.814 [2024-10-01 15:59:02.584039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.584253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.814 [2024-10-01 15:59:02.584264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.814 [2024-10-01 15:59:02.584270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.814 [2024-10-01 15:59:02.584283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.584292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.814 [2024-10-01 15:59:02.584302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.584308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.584315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.814 [2024-10-01 15:59:02.584323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.814 [2024-10-01 15:59:02.584330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.814 [2024-10-01 15:59:02.584338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.814 [2024-10-01 15:59:02.584351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.814 [2024-10-01 15:59:02.584358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.594938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.594960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.595261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.595277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.815 [2024-10-01 15:59:02.595284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.595483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.595494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.815 [2024-10-01 15:59:02.595501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.595644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.595656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.595793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.595802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.595809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.595822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.595828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.595834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.595904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.595913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.815 [2024-10-01 15:59:02.605456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.605476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.605710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.605723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.815 [2024-10-01 15:59:02.605730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.605921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.605933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.815 [2024-10-01 15:59:02.605940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.605951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.605961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.605970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.605977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.605983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.815 [2024-10-01 15:59:02.605992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.605997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.606004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.606017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.606024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.618161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.618182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.618368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.618380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.815 [2024-10-01 15:59:02.618388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.618527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.618536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.815 [2024-10-01 15:59:02.618543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.618565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.618575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.618584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.618590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.618596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.618605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.618611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.618617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.618630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.618637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.815 [2024-10-01 15:59:02.628872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.628893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.629077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.629089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.815 [2024-10-01 15:59:02.629096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.629290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.629299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.815 [2024-10-01 15:59:02.629306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.629317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.629327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.629336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.629342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.629348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.815 [2024-10-01 15:59:02.629357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.629362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.629369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.629382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.629389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.641151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.641171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.641337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.641349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.815 [2024-10-01 15:59:02.641356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.641549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.815 [2024-10-01 15:59:02.641559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.815 [2024-10-01 15:59:02.641565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.815 [2024-10-01 15:59:02.641577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.641585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.815 [2024-10-01 15:59:02.641595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.641601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.641607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.641616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.815 [2024-10-01 15:59:02.641622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.815 [2024-10-01 15:59:02.641627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.815 [2024-10-01 15:59:02.641640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.815 [2024-10-01 15:59:02.641647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.815 [2024-10-01 15:59:02.653430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.653451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.815 [2024-10-01 15:59:02.653756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.653771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.816 [2024-10-01 15:59:02.653779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.653996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.654007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.654014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.654297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.654311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.654462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.654472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.654478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.816 [2024-10-01 15:59:02.654487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.654497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.654503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.654534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.654542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.664492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.664513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.664753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.664766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.664774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.664990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.665001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.816 [2024-10-01 15:59:02.665007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.665248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.665260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.665409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.665419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.665426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.665435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.665441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.665448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.665477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.665484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.816 [2024-10-01 15:59:02.675913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.675933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.676154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.676166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.816 [2024-10-01 15:59:02.676173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.676390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.676400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.676407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.676418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.676431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.676441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.676448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.676453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.816 [2024-10-01 15:59:02.676462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.676468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.676474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.676488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.676494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.688308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.688330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.688543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.688556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.688563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.688731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.688740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.816 [2024-10-01 15:59:02.688747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.688759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.688768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.688778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.688784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.688790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.688799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.688805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.688811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.688825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.688831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.816 [2024-10-01 15:59:02.700076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.700098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.700285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.700301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.816 [2024-10-01 15:59:02.700309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.700526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.700537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.700543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.700555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.700564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.816 [2024-10-01 15:59:02.700574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.700580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.700586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.816 [2024-10-01 15:59:02.700595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.816 [2024-10-01 15:59:02.700600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.816 [2024-10-01 15:59:02.700606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.816 [2024-10-01 15:59:02.700620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.700626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.816 [2024-10-01 15:59:02.711719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.711741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.816 [2024-10-01 15:59:02.712013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.816 [2024-10-01 15:59:02.712028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.816 [2024-10-01 15:59:02.712036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.816 [2024-10-01 15:59:02.712178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.712187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.817 [2024-10-01 15:59:02.712194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.712206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.712215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.712225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.712231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.712237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.712246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.712251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.712261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.712274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.712281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.817 [2024-10-01 15:59:02.723868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.723889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.724126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.724139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.817 [2024-10-01 15:59:02.724146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.724335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.724345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.817 [2024-10-01 15:59:02.724352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.724364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.724373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.724392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.724399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.724405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.817 [2024-10-01 15:59:02.724413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.724420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.724426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.724439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.724446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.736016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.736037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.736229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.736242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.817 [2024-10-01 15:59:02.736249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.736488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.736499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.817 [2024-10-01 15:59:02.736505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.736517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.736527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.736540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.736546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.736552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.736561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.736566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.736573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.736586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.736592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.817 [2024-10-01 15:59:02.747116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.747136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.747377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.747395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.817 [2024-10-01 15:59:02.747403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.747536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.747545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.817 [2024-10-01 15:59:02.747552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.747563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.747573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.747582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.747588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.747595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.817 [2024-10-01 15:59:02.747603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.747608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.747615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.747628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.747634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.760034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.760055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.817 [2024-10-01 15:59:02.760422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.760438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.817 [2024-10-01 15:59:02.760449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.760590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.817 [2024-10-01 15:59:02.760599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.817 [2024-10-01 15:59:02.760605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.817 [2024-10-01 15:59:02.760749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.760762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.817 [2024-10-01 15:59:02.760905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.760916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.760923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.760932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.817 [2024-10-01 15:59:02.760938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.817 [2024-10-01 15:59:02.760944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.817 [2024-10-01 15:59:02.760974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.817 [2024-10-01 15:59:02.760981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.818 [2024-10-01 15:59:02.770455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.770476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.770706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.770719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.818 [2024-10-01 15:59:02.770726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.770942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.770953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.818 [2024-10-01 15:59:02.770960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.771200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.771213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.771250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.771257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.771263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.818 [2024-10-01 15:59:02.771272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.771277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.771283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.771416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.771425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.782300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.782320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.782537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.782549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.818 [2024-10-01 15:59:02.782556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.782641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.782650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.818 [2024-10-01 15:59:02.782657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.782668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.782677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.782687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.782693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.782699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.782707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.782714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.782720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.782733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.782740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.818 [2024-10-01 15:59:02.793431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.793451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.793675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.818 [2024-10-01 15:59:02.793683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.793884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.793894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.818 [2024-10-01 15:59:02.793901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.793913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.793922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.793932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.793942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.793948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.818 [2024-10-01 15:59:02.793957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.793963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.793969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.793983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.793990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.803956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.803977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.804213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.804227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.818 [2024-10-01 15:59:02.804235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.804441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.804451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.818 [2024-10-01 15:59:02.804457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.804469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.804478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.804495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.804502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.804509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.804518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.804524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.804530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.818 [2024-10-01 15:59:02.804543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.818 [2024-10-01 15:59:02.804550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.818 [2024-10-01 15:59:02.814774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.814795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.818 [2024-10-01 15:59:02.815012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.815026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.818 [2024-10-01 15:59:02.815034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.815138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.818 [2024-10-01 15:59:02.815148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.818 [2024-10-01 15:59:02.815155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.818 [2024-10-01 15:59:02.815854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.815874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.818 [2024-10-01 15:59:02.816342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.816353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.816360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.818 [2024-10-01 15:59:02.816370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.818 [2024-10-01 15:59:02.816376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.818 [2024-10-01 15:59:02.816382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.816681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.816691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.826306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.826328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.826561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.826574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.819 [2024-10-01 15:59:02.826582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.826776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.826787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.819 [2024-10-01 15:59:02.826794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.826806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.826816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.826825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.826832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.826838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.826846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.826852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.826859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.826877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.826884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.819 [2024-10-01 15:59:02.838183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.838204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.838313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.838326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.819 [2024-10-01 15:59:02.838333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.838549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.838559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.819 [2024-10-01 15:59:02.838565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.838577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.838586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.838595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.838601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.838607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.819 [2024-10-01 15:59:02.838615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.838621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.838627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.838641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.838647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.851022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.851043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.851284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.851304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.819 [2024-10-01 15:59:02.851311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.851504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.851515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.819 [2024-10-01 15:59:02.851522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.851533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.851542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.851552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.851558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.851567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.851576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.851582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.851588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.851601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.851607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.819 [2024-10-01 15:59:02.862411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.862432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.862683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.862697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.819 [2024-10-01 15:59:02.862705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.862920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.862932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.819 [2024-10-01 15:59:02.862939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.862951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.862960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.862969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.862976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.862982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.819 [2024-10-01 15:59:02.862990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.862996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.863002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.863016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.863022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.872992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.873012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.819 [2024-10-01 15:59:02.873199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.873211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.819 [2024-10-01 15:59:02.873218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.873384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.819 [2024-10-01 15:59:02.873393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.819 [2024-10-01 15:59:02.873403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.819 [2024-10-01 15:59:02.873415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.873423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.819 [2024-10-01 15:59:02.873433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.873440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.873446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.873454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.819 [2024-10-01 15:59:02.873460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.819 [2024-10-01 15:59:02.873466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.819 [2024-10-01 15:59:02.873479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.819 [2024-10-01 15:59:02.873486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.819 [2024-10-01 15:59:02.884857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.884883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.885041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.885053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.820 [2024-10-01 15:59:02.885060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.885229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.885240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.820 [2024-10-01 15:59:02.885247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.885259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.885268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.885278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.885284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.885290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.820 [2024-10-01 15:59:02.885299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.885305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.885311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.885324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.885330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.895699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.895723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.895887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.895900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.820 [2024-10-01 15:59:02.895908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.896100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.896110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.820 [2024-10-01 15:59:02.896117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.896129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.896138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.896148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.896154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.896161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.896169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.896175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.896181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.896194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.896201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.820 [2024-10-01 15:59:02.907576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.907598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.907757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.907770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.820 [2024-10-01 15:59:02.907778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.907915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.907926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.820 [2024-10-01 15:59:02.907933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.907945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.907955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.907965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.907972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.907978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.820 [2024-10-01 15:59:02.907990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.907996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.908002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.908016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.908023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.918177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.918198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.918370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.918383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.820 [2024-10-01 15:59:02.918391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.918551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.918561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.820 [2024-10-01 15:59:02.918569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.918581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.918590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.918600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.918607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.918614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.918623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.918629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.918635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.918648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.918655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.820 [2024-10-01 15:59:02.931345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.931366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.931601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.931614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.820 [2024-10-01 15:59:02.931622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.931838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.931849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.820 [2024-10-01 15:59:02.931857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.820 [2024-10-01 15:59:02.931878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.931888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.820 [2024-10-01 15:59:02.931898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.931905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.931911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.820 [2024-10-01 15:59:02.931920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.820 [2024-10-01 15:59:02.931925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.820 [2024-10-01 15:59:02.931931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.820 [2024-10-01 15:59:02.931945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.931952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.820 [2024-10-01 15:59:02.942354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.942375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.820 [2024-10-01 15:59:02.942607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.820 [2024-10-01 15:59:02.942619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.821 [2024-10-01 15:59:02.942627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.942766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.942775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:02.942782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.942794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.942803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.942813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.942819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.942826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.942834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.942840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.942847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.942860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.942873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.821 [2024-10-01 15:59:02.953248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.953269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.953487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.953499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:02.953507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.953722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.953732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.821 [2024-10-01 15:59:02.953739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.953750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.953760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.953769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.953775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.953782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.821 [2024-10-01 15:59:02.953791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.953797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.953803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.953816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.953823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.964188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.964210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.964409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.964422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.821 [2024-10-01 15:59:02.964429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.964572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.964582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:02.964589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.964600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.964609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.964619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.964625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.964632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.964640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.964650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.964656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.964669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.964675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.821 [2024-10-01 15:59:02.975046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.975068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.975325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.975338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:02.975345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.975488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.975497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.821 [2024-10-01 15:59:02.975504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.975515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.975525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.975535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.975541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.975547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.821 [2024-10-01 15:59:02.975555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.975561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.975567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.975581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.975587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.987415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.987437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.987701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.987715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.821 [2024-10-01 15:59:02.987722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.987918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:02.987929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:02.987936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:02.988774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.988792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.821 [2024-10-01 15:59:02.989144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.989156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.989163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.989173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.821 [2024-10-01 15:59:02.989179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.821 [2024-10-01 15:59:02.989185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.821 [2024-10-01 15:59:02.989340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.821 [2024-10-01 15:59:02.989349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.821 11355.82 IOPS, 44.36 MiB/s [2024-10-01 15:59:02.999765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:02.999783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.821 [2024-10-01 15:59:03.000053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.821 [2024-10-01 15:59:03.000067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.821 [2024-10-01 15:59:03.000075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.821 [2024-10-01 15:59:03.000290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.000301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.822 [2024-10-01 15:59:03.000307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.000320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.000329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.000339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.000345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.000351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.822 [2024-10-01 15:59:03.000359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.000365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.000372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.000385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.000391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.010566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.010588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.010827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.010851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.822 [2024-10-01 15:59:03.010859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.011009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.011019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.822 [2024-10-01 15:59:03.011026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.011038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.011047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.011057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.011063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.011069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.011078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.011084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.011090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.011104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.011110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.822 [2024-10-01 15:59:03.021910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.021931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.022175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.022188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.822 [2024-10-01 15:59:03.022196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.022387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.022397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.822 [2024-10-01 15:59:03.022404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.022416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.022425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.022435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.022442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.022448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.822 [2024-10-01 15:59:03.022456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.022462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.022472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.022485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.022492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.033597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.033618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.033853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.033870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.822 [2024-10-01 15:59:03.033878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.034016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.034026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.822 [2024-10-01 15:59:03.034033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.034044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.034053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.034064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.034070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.034077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.034085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.034091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.034097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.034110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.822 [2024-10-01 15:59:03.034117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.822 [2024-10-01 15:59:03.045052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.045073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.822 [2024-10-01 15:59:03.045256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.045270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.822 [2024-10-01 15:59:03.045277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.045493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.822 [2024-10-01 15:59:03.045503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.822 [2024-10-01 15:59:03.045509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.822 [2024-10-01 15:59:03.045588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.045601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.822 [2024-10-01 15:59:03.045677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.045684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.045690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.822 [2024-10-01 15:59:03.045699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.822 [2024-10-01 15:59:03.045705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.822 [2024-10-01 15:59:03.045711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.822 [2024-10-01 15:59:03.047416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.823 [2024-10-01 15:59:03.047433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.823 [2024-10-01 15:59:03.057279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.823 [2024-10-01 15:59:03.057301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.823 [2024-10-01 15:59:03.057857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.823 [2024-10-01 15:59:03.057879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.823 [2024-10-01 15:59:03.057887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.823 [2024-10-01 15:59:03.058157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.823 [2024-10-01 15:59:03.058166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.823 [2024-10-01 15:59:03.058173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.823 [2024-10-01 15:59:03.058446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.823 [2024-10-01 15:59:03.058458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.823 [2024-10-01 15:59:03.058494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.823 [2024-10-01 15:59:03.058501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.823 [2024-10-01 15:59:03.058508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.823 [2024-10-01 15:59:03.058517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.823 [2024-10-01 15:59:03.058523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.823 [2024-10-01 15:59:03.058529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.823 [2024-10-01 15:59:03.058542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.823 [2024-10-01 15:59:03.058548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.823 [2024-10-01 15:59:03.067360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.823 [2024-10-01 15:59:03.067391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.823 [2024-10-01 15:59:03.067545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.823 [2024-10-01 15:59:03.067558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.823 [2024-10-01 15:59:03.067569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.823 [2024-10-01 15:59:03.067791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.823 [2024-10-01 15:59:03.067803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.823 [2024-10-01 15:59:03.067810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.823 [2024-10-01 15:59:03.067818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.823 [2024-10-01 15:59:03.067830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.823 [2024-10-01 15:59:03.067838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.823 [2024-10-01 15:59:03.067844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.823 [2024-10-01 15:59:03.067850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.823 [2024-10-01 15:59:03.067868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.067875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.067881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.067887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.067898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.077426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.077674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.077690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.823 [2024-10-01 15:59:03.077698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.077718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.077731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.077743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.077750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.077757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.077768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.077915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.077927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.823 [2024-10-01 15:59:03.077933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.078118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.078149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.078157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.078167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.078181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.089079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.089101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.089455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.089471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.823 [2024-10-01 15:59:03.089478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.089681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.089691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.823 [2024-10-01 15:59:03.089698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.089956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.089971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.090120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.090130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.090137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.090146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.090152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.090158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.090188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.090195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.100268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.100290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.100558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.100571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.823 [2024-10-01 15:59:03.100578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.100671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.823 [2024-10-01 15:59:03.100680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.823 [2024-10-01 15:59:03.100687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.823 [2024-10-01 15:59:03.100815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.100827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.823 [2024-10-01 15:59:03.100988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.100998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.101005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.101014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.823 [2024-10-01 15:59:03.101020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.823 [2024-10-01 15:59:03.101026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.823 [2024-10-01 15:59:03.101168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.101179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.823 [2024-10-01 15:59:03.111021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.823 [2024-10-01 15:59:03.111042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.111253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.111267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.824 [2024-10-01 15:59:03.111275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.111423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.111433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.111440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.111451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.111460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.111469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.111476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.111482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.111490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.111496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.111502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.111516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.111522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.122169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.122191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.122307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.122320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.122327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.122549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.122559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.824 [2024-10-01 15:59:03.122566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.122577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.122586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.122596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.122602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.122608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.122616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.122622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.122628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.122641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.122648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.133368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.133389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.133576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.133588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.824 [2024-10-01 15:59:03.133596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.133810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.133820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.133827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.134484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.134499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.134635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.134643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.134649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.134658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.134663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.134670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.135491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.135508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.143808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.143829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.144083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.144097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.144104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.144317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.144327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.824 [2024-10-01 15:59:03.144334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.144527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.144539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.144572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.144580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.144586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.144596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.144601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.144607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.144621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.144627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.154646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.154668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.154850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.154869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.824 [2024-10-01 15:59:03.154877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.154972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.154982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.154988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.824 [2024-10-01 15:59:03.155000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.155009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.824 [2024-10-01 15:59:03.155019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.155025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.155041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.155050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.824 [2024-10-01 15:59:03.155055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.824 [2024-10-01 15:59:03.155061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.824 [2024-10-01 15:59:03.155176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.155186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.824 [2024-10-01 15:59:03.165536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.165558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.824 [2024-10-01 15:59:03.165739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.824 [2024-10-01 15:59:03.165752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.824 [2024-10-01 15:59:03.165760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.165985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.165997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.166003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.166016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.166025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.166035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.166040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.166047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.166055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.166061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.166066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.166080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.166087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.176094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.176115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.176341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.176357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.176367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.176636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.176651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.825 [2024-10-01 15:59:03.176658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.177017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.177033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.177083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.177091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.177098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.177107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.177113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.177119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.177309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.177321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.187653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.187675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.187906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.187920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.825 [2024-10-01 15:59:03.187927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.188077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.188087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.188093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.188392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.188407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.188559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.188569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.188576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.188585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.188591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.188597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.188627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.188635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.198827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.198848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.199043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.199056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.199063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.199159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.199169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.825 [2024-10-01 15:59:03.199175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.199514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.199528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.199685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.199695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.199702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.199711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.199717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.199724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.199908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.199919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.210246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.210267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.210425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.210437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.825 [2024-10-01 15:59:03.210445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.210540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.210549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.210556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.210567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.210576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.825 [2024-10-01 15:59:03.210585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.210592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.210602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.210611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.825 [2024-10-01 15:59:03.210616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.825 [2024-10-01 15:59:03.210622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.825 [2024-10-01 15:59:03.210635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.210642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.825 [2024-10-01 15:59:03.221568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.221590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.825 [2024-10-01 15:59:03.221710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.221722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.825 [2024-10-01 15:59:03.221729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.825 [2024-10-01 15:59:03.221811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.825 [2024-10-01 15:59:03.221821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.826 [2024-10-01 15:59:03.221828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.826 [2024-10-01 15:59:03.221839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.826 [2024-10-01 15:59:03.221848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.826 [2024-10-01 15:59:03.221858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.826 [2024-10-01 15:59:03.221871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.826 [2024-10-01 15:59:03.221878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.826 [2024-10-01 15:59:03.221886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.826 [2024-10-01 15:59:03.221891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.826 [2024-10-01 15:59:03.221897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.826 [2024-10-01 15:59:03.221911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.826 [2024-10-01 15:59:03.221917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.826 [2024-10-01 15:59:03.233241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.826 [2024-10-01 15:59:03.233262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.826 [2024-10-01 15:59:03.233587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.826 [2024-10-01 15:59:03.233603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.826 [2024-10-01 15:59:03.233611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.826 [2024-10-01 15:59:03.233782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.826 [2024-10-01 15:59:03.233792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.826 [2024-10-01 15:59:03.233803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.826 [2024-10-01 15:59:03.233986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.826 [2024-10-01 15:59:03.234001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.826 [2024-10-01 15:59:03.234141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.826 [2024-10-01 15:59:03.234151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.826 [2024-10-01 15:59:03.234158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.826 [2024-10-01 15:59:03.234167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.826 [2024-10-01 15:59:03.234173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.826 [2024-10-01 15:59:03.234179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.826 [2024-10-01 15:59:03.234209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.826 [2024-10-01 15:59:03.234217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.826 [2024-10-01 15:59:03.244399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.244421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.244523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.244536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.826 [2024-10-01 15:59:03.244544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.244681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.244691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.826 [2024-10-01 15:59:03.244697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.244848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.244861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.244957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.244966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.244972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.826 [2024-10-01 15:59:03.244982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.244988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.244993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.826 [2024-10-01 15:59:03.245109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.826 [2024-10-01 15:59:03.245118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.826 [2024-10-01 15:59:03.254714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.254735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.254833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.254845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.826 [2024-10-01 15:59:03.254853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.254957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.254967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.826 [2024-10-01 15:59:03.254973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.254985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.254994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.255004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.255009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.255016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.826 [2024-10-01 15:59:03.255025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.255031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.255038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.826 [2024-10-01 15:59:03.255051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.826 [2024-10-01 15:59:03.255057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.826 [2024-10-01 15:59:03.265600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.265622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.826 [2024-10-01 15:59:03.265788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.265800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.826 [2024-10-01 15:59:03.265808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.265884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.826 [2024-10-01 15:59:03.265894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.826 [2024-10-01 15:59:03.265902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.826 [2024-10-01 15:59:03.265913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.265922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.826 [2024-10-01 15:59:03.265932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.265938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.265945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.826 [2024-10-01 15:59:03.265953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.826 [2024-10-01 15:59:03.265963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.826 [2024-10-01 15:59:03.265969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.826 [2024-10-01 15:59:03.265982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.265989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.276543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.276565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.276693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.276706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.827 [2024-10-01 15:59:03.276714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.276927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.276938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.827 [2024-10-01 15:59:03.276945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.276956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.276965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.276975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.276981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.276987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.276996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.277002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.277008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.277022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.277028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.827 [2024-10-01 15:59:03.287811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.287832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.288002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.288014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.827 [2024-10-01 15:59:03.288022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.288173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.288183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.827 [2024-10-01 15:59:03.288189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.288902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.288918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.289695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.289708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.289715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.827 [2024-10-01 15:59:03.289724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.289730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.289736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.290031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.290041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.299436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.299457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.299607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.299621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.827 [2024-10-01 15:59:03.299628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.299722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.299731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.827 [2024-10-01 15:59:03.299738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.299749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.299758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.299768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.299775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.299781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.299790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.299796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.299802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.299815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.299823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.827 [2024-10-01 15:59:03.310595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.310619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.311135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.311157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.827 [2024-10-01 15:59:03.311165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.311239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.311248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.827 [2024-10-01 15:59:03.311255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.311421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.311434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.311583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.311593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.311600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.827 [2024-10-01 15:59:03.311609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.311615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.311621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.311650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.311658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.321912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.321934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.322435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.322453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.827 [2024-10-01 15:59:03.322461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.322690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.827 [2024-10-01 15:59:03.322701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.827 [2024-10-01 15:59:03.322708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.827 [2024-10-01 15:59:03.323183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.323199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.827 [2024-10-01 15:59:03.323367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.323377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.323384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.323394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.827 [2024-10-01 15:59:03.323399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.827 [2024-10-01 15:59:03.323409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.827 [2024-10-01 15:59:03.323519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.827 [2024-10-01 15:59:03.323529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.827 [2024-10-01 15:59:03.333337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.827 [2024-10-01 15:59:03.333358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.333473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.333486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.828 [2024-10-01 15:59:03.333493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.333645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.333655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.828 [2024-10-01 15:59:03.333661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.334034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.334050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.334321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.334331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.334338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.828 [2024-10-01 15:59:03.334347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.334354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.334360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.334514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.334523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.344901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.344923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.345185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.345202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.828 [2024-10-01 15:59:03.345209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.345387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.345401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.828 [2024-10-01 15:59:03.345408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.345659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.345677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.345714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.345721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.345728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.345736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.345743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.345749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.345884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.345893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.828 [2024-10-01 15:59:03.356522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.356543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.356713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.356725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.828 [2024-10-01 15:59:03.356733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.356832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.356842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.828 [2024-10-01 15:59:03.356850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.356867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.356876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.356886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.356892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.356898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.828 [2024-10-01 15:59:03.356906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.356912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.356918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.356931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.356939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.368585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.368606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.368726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.368739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.828 [2024-10-01 15:59:03.368749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.368835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.368844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.828 [2024-10-01 15:59:03.368851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.368868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.368878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.368887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.368893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.368899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.368908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.368913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.368919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.368933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.368940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.828 [2024-10-01 15:59:03.379993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.380013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.380176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.380189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.828 [2024-10-01 15:59:03.380197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.380320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.380330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.828 [2024-10-01 15:59:03.380337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.828 [2024-10-01 15:59:03.380349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.380358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.828 [2024-10-01 15:59:03.380368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.380374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.380380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.828 [2024-10-01 15:59:03.380389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.828 [2024-10-01 15:59:03.380395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.828 [2024-10-01 15:59:03.380401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.828 [2024-10-01 15:59:03.380420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.380426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.828 [2024-10-01 15:59:03.391378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.391399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.828 [2024-10-01 15:59:03.391516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.828 [2024-10-01 15:59:03.391529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.391536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.391635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.391644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.829 [2024-10-01 15:59:03.391652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.391663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.391673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.391682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.391689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.391695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.391704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.391710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.391716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.391729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.391735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.829 [2024-10-01 15:59:03.401542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.401562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.401673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.401686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.829 [2024-10-01 15:59:03.401693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.401887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.401898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.401904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.401916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.401925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.401939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.401945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.401951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.829 [2024-10-01 15:59:03.401960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.401966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.401972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.401985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.401992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.411620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.411650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.411808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.411820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.411827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.411951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.411962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.829 [2024-10-01 15:59:03.411970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.411978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.411989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.411997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.412002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.412009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.412021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.412028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.412034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.412040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.412051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.829 [2024-10-01 15:59:03.422125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.422148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.422310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.422323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.422330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.422478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.422488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.829 [2024-10-01 15:59:03.422496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.422508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.422517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.422527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.422533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.422539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.829 [2024-10-01 15:59:03.422548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.422554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.422560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.422573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.422580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.433291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.433313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.433482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.433494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.829 [2024-10-01 15:59:03.433502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.433593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.433602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.433609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.433620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.433629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.829 [2024-10-01 15:59:03.433639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.433646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.433653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.433661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.829 [2024-10-01 15:59:03.433667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.829 [2024-10-01 15:59:03.433674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.829 [2024-10-01 15:59:03.433688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.829 [2024-10-01 15:59:03.433698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.829 [2024-10-01 15:59:03.444351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.444373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.829 [2024-10-01 15:59:03.444545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.444558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.829 [2024-10-01 15:59:03.444566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.829 [2024-10-01 15:59:03.444657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.829 [2024-10-01 15:59:03.444666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.444673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.444685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.444695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.444704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.444711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.444717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.830 [2024-10-01 15:59:03.444726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.444732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.444738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.444751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.444758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.454431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.454461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.454553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.454565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.454572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.454661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.454671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.830 [2024-10-01 15:59:03.454677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.454685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.454697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.454705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.454714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.454721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.454733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.454740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.454746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.454752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.454763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.830 [2024-10-01 15:59:03.464496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.464747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.464762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.464770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.464790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.464803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.464816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.464822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.464829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.464841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.830 [2024-10-01 15:59:03.465020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.465030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.830 [2024-10-01 15:59:03.465038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.465049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.465059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.465065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.465071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.465083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.830 [2024-10-01 15:59:03.475250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.475299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.475485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.475498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.475505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.475770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.475788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.830 [2024-10-01 15:59:03.475795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.475804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.475833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.475841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.475847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.475853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.830 [2024-10-01 15:59:03.475873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.475881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.475886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.475892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.475904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.485362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.485392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.485629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.485644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.485651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.485961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.485977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.830 [2024-10-01 15:59:03.485984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.485993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.486136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.486147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.486153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.486159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.486189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.830 [2024-10-01 15:59:03.486196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.830 [2024-10-01 15:59:03.486202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.830 [2024-10-01 15:59:03.486208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.830 [2024-10-01 15:59:03.486220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.830 [2024-10-01 15:59:03.495922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.495942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.830 [2024-10-01 15:59:03.496034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.496046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.830 [2024-10-01 15:59:03.496053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.496225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.830 [2024-10-01 15:59:03.496235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.830 [2024-10-01 15:59:03.496241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.830 [2024-10-01 15:59:03.496253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.830 [2024-10-01 15:59:03.496262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.496272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.496278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.496285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.831 [2024-10-01 15:59:03.496293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.496298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.496305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.496318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.496325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.507337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.507358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.507568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.507580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.831 [2024-10-01 15:59:03.507588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.507710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.507721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.831 [2024-10-01 15:59:03.507728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.507739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.507748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.507758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.507764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.507774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.507783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.507789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.507795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.507808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.507815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.831 [2024-10-01 15:59:03.519854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.519888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.520130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.520148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.831 [2024-10-01 15:59:03.520155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.520297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.520307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.831 [2024-10-01 15:59:03.520314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.520325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.520334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.520344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.520350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.520357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.831 [2024-10-01 15:59:03.520365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.520371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.520377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.520391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.520398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.531700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.531722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.532092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.532109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.831 [2024-10-01 15:59:03.532117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.532263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.532272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.831 [2024-10-01 15:59:03.532283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.532427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.532440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.532588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.532599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.532606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.532615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.532621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.532628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.532656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.532664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.831 [2024-10-01 15:59:03.542517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.542537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.542718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.542731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.831 [2024-10-01 15:59:03.542738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.542929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.831 [2024-10-01 15:59:03.542940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.831 [2024-10-01 15:59:03.542946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.831 [2024-10-01 15:59:03.542957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.542967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.831 [2024-10-01 15:59:03.542976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.542983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.542989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.831 [2024-10-01 15:59:03.542998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.831 [2024-10-01 15:59:03.543004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.831 [2024-10-01 15:59:03.543010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.831 [2024-10-01 15:59:03.543023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.543030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.831 [2024-10-01 15:59:03.555474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.555500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.831 [2024-10-01 15:59:03.555761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.555777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.832 [2024-10-01 15:59:03.555784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.555953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.555963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.555970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.556253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.556268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.556306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.556314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.556320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.556329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.556336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.556343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.556471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.556481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.832 [2024-10-01 15:59:03.566332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.566353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.566560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.566573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.566580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.566796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.566806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.832 [2024-10-01 15:59:03.566812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.566824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.566833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.566843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.566849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.566855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.832 [2024-10-01 15:59:03.566869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.566879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.566885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.566898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.566905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.579124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.579146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.579457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.579473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.832 [2024-10-01 15:59:03.579480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.579692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.579703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.579709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.579998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.580013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.580163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.580173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.580180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.580190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.580196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.580202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.580232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.580240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.832 [2024-10-01 15:59:03.590345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.590366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.590597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.590609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.590616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.590755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.590765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.832 [2024-10-01 15:59:03.590772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.590787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.590796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.590806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.590812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.590818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.832 [2024-10-01 15:59:03.590826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.590832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.590838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.590852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.590858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.601344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.601364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.601575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.601588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.832 [2024-10-01 15:59:03.601595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.601740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.601750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.601756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.601768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.601777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.832 [2024-10-01 15:59:03.601786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.601792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.601799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.601807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.832 [2024-10-01 15:59:03.601813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.832 [2024-10-01 15:59:03.601821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.832 [2024-10-01 15:59:03.601834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.832 [2024-10-01 15:59:03.601841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.832 [2024-10-01 15:59:03.613048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.613069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.832 [2024-10-01 15:59:03.613304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.832 [2024-10-01 15:59:03.613329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.832 [2024-10-01 15:59:03.613337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.832 [2024-10-01 15:59:03.613478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.613487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.613494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.613505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.613515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.613525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.613530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.613537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.833 [2024-10-01 15:59:03.613545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.613551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.613557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.613570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.613577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.623761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.623782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.623969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.623983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.623991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.624209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.624220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.833 [2024-10-01 15:59:03.624226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.624672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.624687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.624856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.624871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.624878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.624888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.624894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.624903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.625046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.625055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.833 [2024-10-01 15:59:03.633891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.633911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.634084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.634096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.833 [2024-10-01 15:59:03.634104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.634319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.634330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.634338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.634350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.634359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.634845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.634855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.634869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.833 [2024-10-01 15:59:03.634880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.634886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.634892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.635494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.635506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.646100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.646120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.646282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.646294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.646301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.646461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.646471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.833 [2024-10-01 15:59:03.646477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.646489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.646502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.646512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.646518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.646524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.646532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.646538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.646544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.646994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.647005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.833 [2024-10-01 15:59:03.656695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.656716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.656882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.656896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.833 [2024-10-01 15:59:03.656903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.657066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.657075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.657082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.657094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.657103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.833 [2024-10-01 15:59:03.657113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.657119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.657126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.833 [2024-10-01 15:59:03.657134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.833 [2024-10-01 15:59:03.657140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.833 [2024-10-01 15:59:03.657146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.833 [2024-10-01 15:59:03.657160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.657166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.833 [2024-10-01 15:59:03.668747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.668767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.833 [2024-10-01 15:59:03.669003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.669017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.833 [2024-10-01 15:59:03.669030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.669174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.833 [2024-10-01 15:59:03.669184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.833 [2024-10-01 15:59:03.669190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.833 [2024-10-01 15:59:03.669202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.669211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.669229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.669236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.669242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.669251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.669256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.669262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.669276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.669282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.834 [2024-10-01 15:59:03.680921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.680944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.681268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.681284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.834 [2024-10-01 15:59:03.681292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.681368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.681378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.834 [2024-10-01 15:59:03.681384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.681527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.681539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.681690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.681700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.681707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.834 [2024-10-01 15:59:03.681715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.681721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.681728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.681761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.681768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.691630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.691650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.691801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.691813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.834 [2024-10-01 15:59:03.691820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.691958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.691968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.834 [2024-10-01 15:59:03.691975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.691986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.691995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.692005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.692010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.692016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.692025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.692031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.692037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.692050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.692056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.834 [2024-10-01 15:59:03.702709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.702729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.702938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.702952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.834 [2024-10-01 15:59:03.702959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.703167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.703177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.834 [2024-10-01 15:59:03.703183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.703195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.703204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.703217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.703224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.703230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.834 [2024-10-01 15:59:03.703238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.703244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.703250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.703263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.703270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.713806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.713827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.714115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.714130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.834 [2024-10-01 15:59:03.714137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.714278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.714288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.834 [2024-10-01 15:59:03.714294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.714532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.714545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.714693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.714703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.714709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.714718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.714724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.834 [2024-10-01 15:59:03.714730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.834 [2024-10-01 15:59:03.714770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.834 [2024-10-01 15:59:03.714778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.834 [2024-10-01 15:59:03.724251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.724271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.834 [2024-10-01 15:59:03.724437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.724449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.834 [2024-10-01 15:59:03.724456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.724547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.834 [2024-10-01 15:59:03.724557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.834 [2024-10-01 15:59:03.724563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.834 [2024-10-01 15:59:03.724575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.724584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.834 [2024-10-01 15:59:03.724593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.834 [2024-10-01 15:59:03.724600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.724606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.835 [2024-10-01 15:59:03.724614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.724619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.724625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.724639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.724646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.737277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.737300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.737630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.737647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.835 [2024-10-01 15:59:03.737654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.737843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.737854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.835 [2024-10-01 15:59:03.737861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.738058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.738073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.738118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.738127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.738134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.738143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.738149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.738156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.738177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.738189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.835 [2024-10-01 15:59:03.747544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.747565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.747744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.747757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.835 [2024-10-01 15:59:03.747764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.747983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.747993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.835 [2024-10-01 15:59:03.748000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.748380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.748394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.748552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.748562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.748568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.835 [2024-10-01 15:59:03.748577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.748583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.748589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.748732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.748741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.758939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.758960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.759132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.759145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.835 [2024-10-01 15:59:03.759153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.759319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.759328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.835 [2024-10-01 15:59:03.759335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.759510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.759522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.759660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.759674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.759681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.759690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.759696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.759702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.835 [2024-10-01 15:59:03.759733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.835 [2024-10-01 15:59:03.759740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.835 [2024-10-01 15:59:03.770315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.770336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.835 [2024-10-01 15:59:03.770523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.770536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.835 [2024-10-01 15:59:03.770543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.770732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.835 [2024-10-01 15:59:03.770742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.835 [2024-10-01 15:59:03.770749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.835 [2024-10-01 15:59:03.771003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.771017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.835 [2024-10-01 15:59:03.771263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.835 [2024-10-01 15:59:03.771273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.835 [2024-10-01 15:59:03.771279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.835 [2024-10-01 15:59:03.771288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.835 [2024-10-01 15:59:03.771294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.835 [2024-10-01 15:59:03.771300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.835 [2024-10-01 15:59:03.771450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.835 [2024-10-01 15:59:03.771459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.835 [2024-10-01 15:59:03.780996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.835 [2024-10-01 15:59:03.781016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.835 [2024-10-01 15:59:03.781176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.835 [2024-10-01 15:59:03.781189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.835 [2024-10-01 15:59:03.781195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.835 [2024-10-01 15:59:03.781419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.781433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.781439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.781451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.781460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.781470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.781476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.781482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.781490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.781496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.781502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.781515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.781521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.793470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.793491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.793662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.793675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.793682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.793851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.793861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.836 [2024-10-01 15:59:03.793874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.793886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.793895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.793905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.793910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.793917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.793925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.793931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.793937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.793950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.793956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.805112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.805147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.805708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.805725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.836 [2024-10-01 15:59:03.805733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.805874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.805885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.805892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.806074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.806087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.806116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.806124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.806131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.806140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.806159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.806166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.806180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.806187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.815699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.815720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.816077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.816093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.816101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.816261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.816272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.836 [2024-10-01 15:59:03.816278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.816422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.816435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.816573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.816582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.816592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.816601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.816607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.816613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.816643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.816650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.826753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.826776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.826908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.826922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.836 [2024-10-01 15:59:03.826929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.827073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.827083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.827090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.827427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.827441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.836 [2024-10-01 15:59:03.827702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.827712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.827719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.827730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.836 [2024-10-01 15:59:03.827736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.836 [2024-10-01 15:59:03.827742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.836 [2024-10-01 15:59:03.827793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.827802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.836 [2024-10-01 15:59:03.838411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.838432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.836 [2024-10-01 15:59:03.838698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.838712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.836 [2024-10-01 15:59:03.838719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.836 [2024-10-01 15:59:03.838909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.836 [2024-10-01 15:59:03.838920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.836 [2024-10-01 15:59:03.838930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.839061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.839072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.839267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.839277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.839284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.839294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.839299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.839305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.839344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.839352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.848607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.848629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.848858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.848876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.837 [2024-10-01 15:59:03.848884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.848988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.848998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.837 [2024-10-01 15:59:03.849004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.849344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.849357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.849518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.849528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.849535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.849544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.849550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.849556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.849729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.849739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.859334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.859357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.859520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.859533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.837 [2024-10-01 15:59:03.859540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.859690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.859700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.837 [2024-10-01 15:59:03.859706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.859718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.859727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.859737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.859743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.859749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.859757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.859763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.859769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.859782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.859789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.871387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.871409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.871809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.871825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.837 [2024-10-01 15:59:03.871833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.871980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.871990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.837 [2024-10-01 15:59:03.871996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.872248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.872262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.872409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.872419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.872426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.872439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.872445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.872451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.872480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.872488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.883195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.883216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.883428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.883440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.837 [2024-10-01 15:59:03.883447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.883584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.883593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.837 [2024-10-01 15:59:03.883600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.883611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.883620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.883629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.883636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.883642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.883650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.837 [2024-10-01 15:59:03.883656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.837 [2024-10-01 15:59:03.883662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.837 [2024-10-01 15:59:03.883675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.883682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.837 [2024-10-01 15:59:03.895417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.895438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.837 [2024-10-01 15:59:03.895624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.895636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.837 [2024-10-01 15:59:03.895643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.895842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.837 [2024-10-01 15:59:03.895859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.837 [2024-10-01 15:59:03.895872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.837 [2024-10-01 15:59:03.895887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.895896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.837 [2024-10-01 15:59:03.895906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.895912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.895918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.895926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.895932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.895938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.895951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.895958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.907873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.907895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.908127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.908139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.838 [2024-10-01 15:59:03.908147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.908361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.908372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.838 [2024-10-01 15:59:03.908378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.908390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.908399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.908417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.908424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.908431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.908439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.908445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.908451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.908465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.908471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.920655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.920676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.920833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.920846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.838 [2024-10-01 15:59:03.920853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.920942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.920952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.838 [2024-10-01 15:59:03.920959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.920970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.920979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.920989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.920995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.921002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.921010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.921016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.921022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.921035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.921042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.932528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.932550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.932913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.932930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.838 [2024-10-01 15:59:03.932938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.933161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.933172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.838 [2024-10-01 15:59:03.933179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.933377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.933390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.933413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.933420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.933427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.933436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.933445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.933451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.933579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.933588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.944337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.944358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.838 [2024-10-01 15:59:03.944734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.944750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.838 [2024-10-01 15:59:03.944757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.944952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.838 [2024-10-01 15:59:03.944964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.838 [2024-10-01 15:59:03.944971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.838 [2024-10-01 15:59:03.945115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.945127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.838 [2024-10-01 15:59:03.945177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.945186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.945193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.945201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.838 [2024-10-01 15:59:03.945207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.838 [2024-10-01 15:59:03.945213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.838 [2024-10-01 15:59:03.945337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.945346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.838 [2024-10-01 15:59:03.955145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.838 [2024-10-01 15:59:03.955167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.838 [2024-10-01 15:59:03.955529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.838 [2024-10-01 15:59:03.955545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.838 [2024-10-01 15:59:03.955553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.838 [2024-10-01 15:59:03.955700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.838 [2024-10-01 15:59:03.955710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.838 [2024-10-01 15:59:03.955716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.838 [2024-10-01 15:59:03.955859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.838 [2024-10-01 15:59:03.955884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.838 [2024-10-01 15:59:03.956023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.838 [2024-10-01 15:59:03.956034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.838 [2024-10-01 15:59:03.956041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.838 [2024-10-01 15:59:03.956049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.956055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.956062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.956091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.956099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.967070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.967090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.967301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.967313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.839 [2024-10-01 15:59:03.967320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.967484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.967493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.839 [2024-10-01 15:59:03.967500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.967512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.967520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.967530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.967536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.967542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.967551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.967556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.967562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.967576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.967582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.839 [2024-10-01 15:59:03.978689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.978712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.978795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.978807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.839 [2024-10-01 15:59:03.978818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.978983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.978993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.839 [2024-10-01 15:59:03.978999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.979011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.979020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.979029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.979035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.979041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.839 [2024-10-01 15:59:03.979049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.979055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.979062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.979075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.979081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.990560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.990582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:03.990890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.990908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.839 [2024-10-01 15:59:03.990916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.991132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:03.991143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.839 [2024-10-01 15:59:03.991150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:03.991328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.991342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:03.991367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.991375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.991382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.991391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:03.991397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:03.991408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:03.991422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:03.991428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.839 11367.08 IOPS, 44.40 MiB/s [2024-10-01 15:59:04.002212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:04.002230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:04.002396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:04.002408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.839 [2024-10-01 15:59:04.002416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:04.002634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:04.002643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.839 [2024-10-01 15:59:04.002650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:04.003438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:04.003454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:04.003641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:04.003652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:04.003660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.839 [2024-10-01 15:59:04.003669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:04.003675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:04.003682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:04.003705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:04.003712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.839 [2024-10-01 15:59:04.012831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:04.012853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.839 [2024-10-01 15:59:04.013049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:04.013061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.839 [2024-10-01 15:59:04.013069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:04.013231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.839 [2024-10-01 15:59:04.013241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.839 [2024-10-01 15:59:04.013248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.839 [2024-10-01 15:59:04.013260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:04.013269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.839 [2024-10-01 15:59:04.013283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:04.013289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:04.013296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:04.013306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.839 [2024-10-01 15:59:04.013312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.839 [2024-10-01 15:59:04.013319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.839 [2024-10-01 15:59:04.013333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.013340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.840 [2024-10-01 15:59:04.024111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.024132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.024462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.024478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.840 [2024-10-01 15:59:04.024486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.024657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.024667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.840 [2024-10-01 15:59:04.024673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.024929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.024943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.025090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.025100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.025107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.840 [2024-10-01 15:59:04.025116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.025122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.025128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.025154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.025161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.036024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.036044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.036368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.036383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.840 [2024-10-01 15:59:04.036395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.036484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.036493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.840 [2024-10-01 15:59:04.036500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.036642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.036654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.036802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.036811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.036818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.036827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.036833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.036839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.036873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.036880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.840 [2024-10-01 15:59:04.046467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.046488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.046708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.046721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.840 [2024-10-01 15:59:04.046728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.046873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.046884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.840 [2024-10-01 15:59:04.046891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.046903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.046912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.046921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.046927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.046934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.840 [2024-10-01 15:59:04.046942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.046947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.046953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.046971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.046978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.056991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.057013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.057191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.057203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.840 [2024-10-01 15:59:04.057213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.057311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.057320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.840 [2024-10-01 15:59:04.057327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.057580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.057592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.058188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.058200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.058207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.058216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.840 [2024-10-01 15:59:04.058222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.840 [2024-10-01 15:59:04.058228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.840 [2024-10-01 15:59:04.058715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.840 [2024-10-01 15:59:04.058727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.840 [2024-10-01 15:59:04.068805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.068825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.840 [2024-10-01 15:59:04.068960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.068973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.840 [2024-10-01 15:59:04.068981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.069117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.840 [2024-10-01 15:59:04.069127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.840 [2024-10-01 15:59:04.069133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.840 [2024-10-01 15:59:04.069256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.840 [2024-10-01 15:59:04.069269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.841 [2024-10-01 15:59:04.069352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.841 [2024-10-01 15:59:04.069365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.841 [2024-10-01 15:59:04.069371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.841 [2024-10-01 15:59:04.069381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.069387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.069393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.069417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.069424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.079268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.079288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.079438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.079451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.841 [2024-10-01 15:59:04.079458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.079525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.079534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.841 [2024-10-01 15:59:04.079541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.079553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.079561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.079572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.079578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.079585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.079593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.079598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.079604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.079618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.079624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.092021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.092042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.092532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.841 [2024-10-01 15:59:04.092540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.092599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.092608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.841 [2024-10-01 15:59:04.092615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.093260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.093278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.093654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.093665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.093672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.093681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.093688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.093694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.093743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.093751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.102741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.102762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.102941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.102955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.841 [2024-10-01 15:59:04.102963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.103131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.103139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.841 [2024-10-01 15:59:04.103146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.103308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.103320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.103463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.103472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.103478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.103488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.103494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.103500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.103642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.103651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.113347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.113368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.113519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.113532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.841 [2024-10-01 15:59:04.113540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.113730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.113740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.841 [2024-10-01 15:59:04.113747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.113759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.113768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.113777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.113783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.113790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.113798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.113804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.841 [2024-10-01 15:59:04.113810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.841 [2024-10-01 15:59:04.113824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.113830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.841 [2024-10-01 15:59:04.125561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.125582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.841 [2024-10-01 15:59:04.125730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.125742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.841 [2024-10-01 15:59:04.125749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.125890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.841 [2024-10-01 15:59:04.125900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.841 [2024-10-01 15:59:04.125907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.841 [2024-10-01 15:59:04.125918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.125927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.841 [2024-10-01 15:59:04.125937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.841 [2024-10-01 15:59:04.125943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.125953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.125962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.125968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.125973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.126361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.126371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.137750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.137771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.137966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.137980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.842 [2024-10-01 15:59:04.137987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.138131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.138140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.842 [2024-10-01 15:59:04.138147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.138302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.138315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.138454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.138466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.138473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.138482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.138488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.138494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.138638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.138647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.148305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.148326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.148490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.148503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.842 [2024-10-01 15:59:04.148510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.148634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.148643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.842 [2024-10-01 15:59:04.148653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.148665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.148673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.148683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.148689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.148695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.148704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.148709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.148715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.148728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.148735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.160442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.160463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.160782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.160797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.842 [2024-10-01 15:59:04.160805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.160948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.160959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.842 [2024-10-01 15:59:04.160965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.161109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.161121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.161146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.161154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.161160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.161168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.161174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.161180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.161194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.161200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.171310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.171334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.171570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.171582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.842 [2024-10-01 15:59:04.171590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.171807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.171817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.842 [2024-10-01 15:59:04.171823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.171835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.171844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.171854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.171860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.171871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.171879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.171885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.171891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.171904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.171911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.842 [2024-10-01 15:59:04.183704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.183725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.842 [2024-10-01 15:59:04.183957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.183970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.842 [2024-10-01 15:59:04.183978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.184141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.842 [2024-10-01 15:59:04.184151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.842 [2024-10-01 15:59:04.184157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.842 [2024-10-01 15:59:04.184569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.184584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.842 [2024-10-01 15:59:04.184693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.184701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.842 [2024-10-01 15:59:04.184707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.842 [2024-10-01 15:59:04.184720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.842 [2024-10-01 15:59:04.184726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.184732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.184810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.184819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.193784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.193981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.194192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.194207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.843 [2024-10-01 15:59:04.194215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.194489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.194504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.843 [2024-10-01 15:59:04.194511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.194520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.195000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.195013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.195019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.195026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.195255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.195266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.195271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.195278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.195424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.205195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.205215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.205374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.205386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.843 [2024-10-01 15:59:04.205394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.205541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.205551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.843 [2024-10-01 15:59:04.205558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.205847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.205868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.206029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.206040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.206046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.206056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.206062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.206068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.206475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.206487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.215722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.215742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.843 [2024-10-01 15:59:04.215934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.215947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.843 [2024-10-01 15:59:04.215955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.216147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.843 [2024-10-01 15:59:04.216157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.843 [2024-10-01 15:59:04.216164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.843 [2024-10-01 15:59:04.216175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.216184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.843 [2024-10-01 15:59:04.216194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.216201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.216208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.216217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.843 [2024-10-01 15:59:04.216223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.843 [2024-10-01 15:59:04.216229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.843 [2024-10-01 15:59:04.216242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.216249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.843 [2024-10-01 15:59:04.227938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.843 [2024-10-01 15:59:04.227959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.843 [2024-10-01 15:59:04.228350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-10-01 15:59:04.228366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.843 [2024-10-01 15:59:04.228374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.843 [2024-10-01 15:59:04.228524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-10-01 15:59:04.228534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.843 [2024-10-01 15:59:04.228540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.843 [2024-10-01 15:59:04.228745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.843 [2024-10-01 15:59:04.228760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.843 [2024-10-01 15:59:04.228966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.843 [2024-10-01 15:59:04.228978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.843 [2024-10-01 15:59:04.228985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.843 [2024-10-01 15:59:04.228994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.843 [2024-10-01 15:59:04.229000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.843 [2024-10-01 15:59:04.229006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.843 [2024-10-01 15:59:04.229044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.843 [2024-10-01 15:59:04.229052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.843 [2024-10-01 15:59:04.239934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.843 [2024-10-01 15:59:04.239956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.843 [2024-10-01 15:59:04.240344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-10-01 15:59:04.240360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.843 [2024-10-01 15:59:04.240368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.843 [2024-10-01 15:59:04.240585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.843 [2024-10-01 15:59:04.240595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.843 [2024-10-01 15:59:04.240602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.843 [2024-10-01 15:59:04.240746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.843 [2024-10-01 15:59:04.240759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.843 [2024-10-01 15:59:04.240967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.843 [2024-10-01 15:59:04.240978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.843 [2024-10-01 15:59:04.240985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.843 [2024-10-01 15:59:04.240995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.843 [2024-10-01 15:59:04.241007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.843 [2024-10-01 15:59:04.241013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.843 [2024-10-01 15:59:04.241045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.843 [2024-10-01 15:59:04.241054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.844 [2024-10-01 15:59:04.250510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.250532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.250854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.250876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.844 [2024-10-01 15:59:04.250885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.251078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.251089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.844 [2024-10-01 15:59:04.251096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.251244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.251256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.251284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.251291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.251298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.844 [2024-10-01 15:59:04.251307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.251313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.251320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.251447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.251456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.260912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.260933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.261048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.261061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.844 [2024-10-01 15:59:04.261068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.261216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.261225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.844 [2024-10-01 15:59:04.261232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.261243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.261257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.261267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.261273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.261279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.261288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.261293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.261299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.261313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.261319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.844 [2024-10-01 15:59:04.272093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.272114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.272350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.272362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.844 [2024-10-01 15:59:04.272370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.272515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.272524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.844 [2024-10-01 15:59:04.272531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.272543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.272553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.272562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.272568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.272574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.844 [2024-10-01 15:59:04.272583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.272589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.272595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.272609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.272615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.283457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.283478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.283917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.283939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.844 [2024-10-01 15:59:04.283947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.284084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.284094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.844 [2024-10-01 15:59:04.284101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.284246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.284258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.284284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.284292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.284298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.284307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.844 [2024-10-01 15:59:04.284313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.844 [2024-10-01 15:59:04.284319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.844 [2024-10-01 15:59:04.284333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.844 [2024-10-01 15:59:04.284339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.844 [2024-10-01 15:59:04.293539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.293568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.844 [2024-10-01 15:59:04.293776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.293789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.844 [2024-10-01 15:59:04.293796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.293993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.844 [2024-10-01 15:59:04.294004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.844 [2024-10-01 15:59:04.294011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.844 [2024-10-01 15:59:04.294020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.844 [2024-10-01 15:59:04.294031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.294039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.294045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.294051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.845 [2024-10-01 15:59:04.294064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.294071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.294080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.294086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.294097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.304800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.304821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.305057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.305070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.845 [2024-10-01 15:59:04.305078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.305269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.305280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.845 [2024-10-01 15:59:04.305287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.305299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.305308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.305318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.305324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.305330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.305339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.305345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.305351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.305364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.305371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.845 [2024-10-01 15:59:04.317445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.317467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.317832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.317848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.845 [2024-10-01 15:59:04.317856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.318077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.318088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.845 [2024-10-01 15:59:04.318095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.318350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.318363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.318534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.318545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.318552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.845 [2024-10-01 15:59:04.318561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.318568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.318574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.318716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.318726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.328705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.328726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.328955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.328969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.845 [2024-10-01 15:59:04.328977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.329142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.329153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.845 [2024-10-01 15:59:04.329160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.329353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.329366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.329459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.329467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.329473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.329482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.329488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.329495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.329515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.329523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.845 [2024-10-01 15:59:04.339779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.339801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.340011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.340024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.845 [2024-10-01 15:59:04.340035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.340182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.340192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.845 [2024-10-01 15:59:04.340198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.340330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.340341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.340480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.340489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.340496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.845 [2024-10-01 15:59:04.340504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.340510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.340517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.340546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.340554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.845 [2024-10-01 15:59:04.350900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.350921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.845 [2024-10-01 15:59:04.351236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.351251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.845 [2024-10-01 15:59:04.351258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.351458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.845 [2024-10-01 15:59:04.351468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.845 [2024-10-01 15:59:04.351475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.845 [2024-10-01 15:59:04.351647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.351660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.845 [2024-10-01 15:59:04.352480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.352493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.352501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.352510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.845 [2024-10-01 15:59:04.352516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.845 [2024-10-01 15:59:04.352522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.845 [2024-10-01 15:59:04.352834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.352845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.846 [2024-10-01 15:59:04.360980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.361009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.361235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.361247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.361254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.361410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.361420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.361426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.361435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.361447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.361454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.361460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.361466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.846 [2024-10-01 15:59:04.361479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.361485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.361491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.361497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.361509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.371693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.371713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.371874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.371887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.371894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.372035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.372045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.372051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.372062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.372072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.372081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.372091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.372097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.372106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.372112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.372118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.372131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.372138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.846 [2024-10-01 15:59:04.381814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.381835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.382077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.382090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.382097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.382239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.382249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.382256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.382267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.382276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.382286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.382292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.382299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.846 [2024-10-01 15:59:04.382307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.382314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.382320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.382333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.382339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.392878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.392900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.393136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.393148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.393156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.393302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.393312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.393319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.393331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.393340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.393349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.393355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.393362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.393370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.393376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.393382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.393396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.393402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.846 [2024-10-01 15:59:04.404704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.404725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.404923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.404937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.404945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.405090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.405100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.405107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.405347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.405360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.846 [2024-10-01 15:59:04.405396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.405404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.405411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.846 [2024-10-01 15:59:04.405420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.846 [2024-10-01 15:59:04.405425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.846 [2024-10-01 15:59:04.405432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.846 [2024-10-01 15:59:04.405560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.405572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.846 [2024-10-01 15:59:04.414883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.414903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.846 [2024-10-01 15:59:04.415135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.415148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.846 [2024-10-01 15:59:04.415155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.415363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.846 [2024-10-01 15:59:04.415374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.846 [2024-10-01 15:59:04.415380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.846 [2024-10-01 15:59:04.415392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.415402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.415411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.415418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.415424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.415432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.415438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.415444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.415458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.415464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.847 [2024-10-01 15:59:04.427543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.427564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.427713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.427725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.847 [2024-10-01 15:59:04.427732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.427948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.427959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.847 [2024-10-01 15:59:04.427966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.427985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.427995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.428004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.428011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.428020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.847 [2024-10-01 15:59:04.428029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.428034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.428040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.428054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.428060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.438226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.438247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.438458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.438470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.847 [2024-10-01 15:59:04.438477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.438637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.438648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.847 [2024-10-01 15:59:04.438654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.438666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.438674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.438685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.438691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.438698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.438706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.438712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.438718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.438731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.438738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.847 [2024-10-01 15:59:04.449969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.449991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.450162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.450175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.847 [2024-10-01 15:59:04.450183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.450395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.450404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.847 [2024-10-01 15:59:04.450415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.450426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.450435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.450445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.450451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.450457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.847 [2024-10-01 15:59:04.450467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.450473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.450478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.450491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.450498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.461472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.461494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.461728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.461741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.847 [2024-10-01 15:59:04.461748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.461884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.461894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.847 [2024-10-01 15:59:04.461901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.461912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.461921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.461931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.461937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.461944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.461953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.461958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.461964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.461978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.461984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.847 [2024-10-01 15:59:04.473046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.473069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.473234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.473246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.847 [2024-10-01 15:59:04.473253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.473445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.847 [2024-10-01 15:59:04.473455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.847 [2024-10-01 15:59:04.473462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.847 [2024-10-01 15:59:04.473473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.473482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.847 [2024-10-01 15:59:04.473492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.473498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.473505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.847 [2024-10-01 15:59:04.473512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.847 [2024-10-01 15:59:04.473518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.847 [2024-10-01 15:59:04.473524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.847 [2024-10-01 15:59:04.473537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.473544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.847 [2024-10-01 15:59:04.484612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.484635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.847 [2024-10-01 15:59:04.484912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.484927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.848 [2024-10-01 15:59:04.484934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.485131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.485141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.485148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.485160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.485169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.485179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.485185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.485191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.485203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.485209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.485215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.485228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.485235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.848 [2024-10-01 15:59:04.496200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.496222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.496388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.496400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.496407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.496625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.496635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.848 [2024-10-01 15:59:04.496642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.496653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.496663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.496672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.496679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.496685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.848 [2024-10-01 15:59:04.496693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.496699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.496705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.496719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.496725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.507921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.507943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.508176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.508189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.848 [2024-10-01 15:59:04.508196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.508338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.508348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.508355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.508369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.508379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.508389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.508395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.508401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.508409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.508416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.508422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.508435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.508442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.848 [2024-10-01 15:59:04.519608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.519630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.519875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.519888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.519896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.520089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.520100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.848 [2024-10-01 15:59:04.520107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.520118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.520128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.520137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.520143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.520149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.848 [2024-10-01 15:59:04.520158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.520164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.520170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.520183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.520191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.532729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.532753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.533150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.533167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.848 [2024-10-01 15:59:04.533175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.533318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.533328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.533335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.533615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.533629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.848 [2024-10-01 15:59:04.533792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.533803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.533810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.533819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.848 [2024-10-01 15:59:04.533825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.848 [2024-10-01 15:59:04.533831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.848 [2024-10-01 15:59:04.533868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.848 [2024-10-01 15:59:04.533877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.848 [2024-10-01 15:59:04.543838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.543867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.848 [2024-10-01 15:59:04.544031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.848 [2024-10-01 15:59:04.544045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.848 [2024-10-01 15:59:04.544053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.848 [2024-10-01 15:59:04.544150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.544160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.544167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.544179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.544189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.544198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.544204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.544210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.849 [2024-10-01 15:59:04.544219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.544229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.544235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.544249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.544255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.555004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.555028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.555279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.555293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.555300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.555448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.555458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.555465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.555705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.555718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.555764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.555773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.555779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.555788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.555794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.555800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.555814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.555820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.849 [2024-10-01 15:59:04.565325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.565346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.565603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.565617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.565624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.565843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.565855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.565867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.566107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.566123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.566273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.566282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.566289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.849 [2024-10-01 15:59:04.566298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.566304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.566310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.566340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.566347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.577014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.577035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.577201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.577214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.577221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.577442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.577452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.577459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.577470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.577479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.577496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.577504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.577511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.577519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.577525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.577532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.577545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.577552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.849 [2024-10-01 15:59:04.589447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.589469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.589705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.589722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.589731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.589871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.589882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.589889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.589900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.589910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.589928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.589935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.589941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.849 [2024-10-01 15:59:04.589951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.589957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.589963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.589976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.589983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.600882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.600904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.601074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.601086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.849 [2024-10-01 15:59:04.601093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.601241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.601250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.601257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.601268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.601277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.849 [2024-10-01 15:59:04.601287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.601294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.601300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.601308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.849 [2024-10-01 15:59:04.601314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.849 [2024-10-01 15:59:04.601324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.849 [2024-10-01 15:59:04.601338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.849 [2024-10-01 15:59:04.601344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.849 [2024-10-01 15:59:04.612008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.612029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.849 [2024-10-01 15:59:04.612159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.849 [2024-10-01 15:59:04.612171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.849 [2024-10-01 15:59:04.612178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.849 [2024-10-01 15:59:04.612266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.850 [2024-10-01 15:59:04.612276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.850 [2024-10-01 15:59:04.612283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.850 [2024-10-01 15:59:04.612574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.612587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.612737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.850 [2024-10-01 15:59:04.612747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.850 [2024-10-01 15:59:04.612754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.850 [2024-10-01 15:59:04.612763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.850 [2024-10-01 15:59:04.612769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.850 [2024-10-01 15:59:04.612775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.850 [2024-10-01 15:59:04.613131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.850 [2024-10-01 15:59:04.613143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.850 [2024-10-01 15:59:04.622527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.850 [2024-10-01 15:59:04.622548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.850 [2024-10-01 15:59:04.622780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.850 [2024-10-01 15:59:04.622792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.850 [2024-10-01 15:59:04.622800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.850 [2024-10-01 15:59:04.622877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.850 [2024-10-01 15:59:04.622888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.850 [2024-10-01 15:59:04.622894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.850 [2024-10-01 15:59:04.622905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.622914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.622927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.850 [2024-10-01 15:59:04.622933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.850 [2024-10-01 15:59:04.622940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.850 [2024-10-01 15:59:04.622948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.850 [2024-10-01 15:59:04.622953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.850 [2024-10-01 15:59:04.622960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.850 [2024-10-01 15:59:04.622973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.850 [2024-10-01 15:59:04.622980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.850 [2024-10-01 15:59:04.634391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.850 [2024-10-01 15:59:04.634414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.850 [2024-10-01 15:59:04.634683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.850 [2024-10-01 15:59:04.634697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.850 [2024-10-01 15:59:04.634704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.850 [2024-10-01 15:59:04.634919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.850 [2024-10-01 15:59:04.634930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.850 [2024-10-01 15:59:04.634937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.850 [2024-10-01 15:59:04.634948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.634957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.850 [2024-10-01 15:59:04.634975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.850 [2024-10-01 15:59:04.634982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.850 [2024-10-01 15:59:04.634989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.850 [2024-10-01 15:59:04.634997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.635004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.635010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.635023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.635029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.646400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.646422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.646832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.646849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.850 [2024-10-01 15:59:04.646860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.646990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.647001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.850 [2024-10-01 15:59:04.647008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.647039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.647049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.647059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.647065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.647071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.647080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.647088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.647094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.647108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.647114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.656697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.656717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.656878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.656891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.850 [2024-10-01 15:59:04.656898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.657045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.657055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.850 [2024-10-01 15:59:04.657062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.658030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.658046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.658057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.658064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.658071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.658080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.658086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.658092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.658109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.658115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.668347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.668368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.668544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.668556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.850 [2024-10-01 15:59:04.668564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.668720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.668730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.850 [2024-10-01 15:59:04.668737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.668748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.668757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.850 [2024-10-01 15:59:04.668767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.668773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.668779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.668788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.850 [2024-10-01 15:59:04.668794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.850 [2024-10-01 15:59:04.668800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.850 [2024-10-01 15:59:04.668813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.668820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.850 [2024-10-01 15:59:04.680125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.680147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.850 [2024-10-01 15:59:04.680451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.850 [2024-10-01 15:59:04.680467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.850 [2024-10-01 15:59:04.680475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.850 [2024-10-01 15:59:04.680685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.680696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.680703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.680847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.680860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.680898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.680910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.680916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.680926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.680931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.680938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.680951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.680957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.691873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.691895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.692350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.692367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.692375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.692623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.692634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.851 [2024-10-01 15:59:04.692640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.692798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.692811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.692838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.692845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.692852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.692861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.692873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.692879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.693073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.693082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.702839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.702861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.703007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.703020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.851 [2024-10-01 15:59:04.703027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.703244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.703254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.703260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.703391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.703403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.703557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.703568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.703575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.703584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.703590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.703596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.703631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.703640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.712927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.712956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.713189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.713202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.713210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.713790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.713806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.851 [2024-10-01 15:59:04.713814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.713823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.714207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.714221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.714227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.714234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.714391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.714401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.714407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.714413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.714447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.725022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.725045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.725410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.725426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.725434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.725684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.725695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.851 [2024-10-01 15:59:04.725702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.725959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.725973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.726018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.726026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.726032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.726041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.726047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.726054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.726067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.726074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.735748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.735770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.736023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.736039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.851 [2024-10-01 15:59:04.736047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.736241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.736252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.736259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.736404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.736417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.851 [2024-10-01 15:59:04.736555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.736565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.736575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.736585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.851 [2024-10-01 15:59:04.736591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.851 [2024-10-01 15:59:04.736597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.851 [2024-10-01 15:59:04.736626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.736634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.851 [2024-10-01 15:59:04.746751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.746772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.851 [2024-10-01 15:59:04.746930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.746944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.851 [2024-10-01 15:59:04.746951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.851 [2024-10-01 15:59:04.747098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.851 [2024-10-01 15:59:04.747108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.747115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.747127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.747137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.747147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.747153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.747159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.747167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.747173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.747179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.747193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.747199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.757910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.757932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.758129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.758142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.758150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.758319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.758331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.852 [2024-10-01 15:59:04.758338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.758350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.758358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.758368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.758374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.758381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.758389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.758395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.758400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.758414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.758420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.768935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.768957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.769120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.769132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.852 [2024-10-01 15:59:04.769140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.769235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.769245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.769252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.769383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.769394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.769532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.769542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.769549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.769557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.769563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.769569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.769598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.769606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.779400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.779424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.779603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.779615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.779623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.779883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.779894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.852 [2024-10-01 15:59:04.779902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.779913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.779923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.779933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.779939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.779945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.779954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.779960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.779966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.779979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.779985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.792076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.792097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.792282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.792295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.852 [2024-10-01 15:59:04.792302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.792398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.792408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.792415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.792426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.792435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.792445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.792451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.792457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.792469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.792475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.792481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.852 [2024-10-01 15:59:04.792495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.792503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.852 [2024-10-01 15:59:04.804140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.804161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.852 [2024-10-01 15:59:04.804351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.804362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.852 [2024-10-01 15:59:04.804370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.804582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.852 [2024-10-01 15:59:04.804593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.852 [2024-10-01 15:59:04.804600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.852 [2024-10-01 15:59:04.804851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.804871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.852 [2024-10-01 15:59:04.805119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.852 [2024-10-01 15:59:04.805129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.852 [2024-10-01 15:59:04.805136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.853 [2024-10-01 15:59:04.805145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.853 [2024-10-01 15:59:04.805152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.853 [2024-10-01 15:59:04.805158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.853 [2024-10-01 15:59:04.805307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.853 [2024-10-01 15:59:04.805316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.853 [2024-10-01 15:59:04.815183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.815204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.815436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.815449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.815456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.815696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.815706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.815720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.815732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.815741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.815750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.815756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.815762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.853 [2024-10-01 15:59:04.815770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.815776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.815783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.815796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.815802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.826641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.826665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.827075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.827093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.827101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.827246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.827256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.827263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.827516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.827530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.827678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.827688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.827696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.827706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.827712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.827718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.827748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.827755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.853 [2024-10-01 15:59:04.839087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.839108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.839277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.839289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.839296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.839467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.839477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.839483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.839495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.839504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.839515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.839521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.839527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.853 [2024-10-01 15:59:04.839536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.839541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.839547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.839561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.839568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.850610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.850633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.850895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.850910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.850919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.851113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.851124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.851132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.851244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.851258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.851418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.851431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.851438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.851448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.851459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.851465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.851494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.851502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.853 [2024-10-01 15:59:04.861624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.861646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.861894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.861909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.861917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.862017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.862029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.862035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.862047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.862057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.862067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.862073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.862080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.853 [2024-10-01 15:59:04.862089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.862095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.862101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.862116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.862123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.871793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.871816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.853 [2024-10-01 15:59:04.871993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.872008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.853 [2024-10-01 15:59:04.872016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.872163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.853 [2024-10-01 15:59:04.872174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.853 [2024-10-01 15:59:04.872181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.853 [2024-10-01 15:59:04.872197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.872207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.853 [2024-10-01 15:59:04.872217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.872224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.872230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.872239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.853 [2024-10-01 15:59:04.872245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.853 [2024-10-01 15:59:04.872252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.853 [2024-10-01 15:59:04.872265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.853 [2024-10-01 15:59:04.872272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.854 [2024-10-01 15:59:04.883002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.883025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.883196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.883209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.854 [2024-10-01 15:59:04.883217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.883317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.883328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.854 [2024-10-01 15:59:04.883336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.883347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.883356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.883374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.883381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.883389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.854 [2024-10-01 15:59:04.883399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.883405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.883411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.883426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.883433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.893538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.893561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.893715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.893732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.854 [2024-10-01 15:59:04.893740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.893936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.893948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.854 [2024-10-01 15:59:04.893955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.893966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.893976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.893987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.893994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.894000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.894009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.894015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.894022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.894037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.894045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.854 [2024-10-01 15:59:04.905066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.905087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.905204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.905218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.854 [2024-10-01 15:59:04.905226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.905322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.905332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.854 [2024-10-01 15:59:04.905340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.905351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.905361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.905371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.905377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.905385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.854 [2024-10-01 15:59:04.905394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.905400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.905409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.905423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.905429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.916904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.916928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.917835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.917855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.854 [2024-10-01 15:59:04.917868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.917949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.917959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.854 [2024-10-01 15:59:04.917966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.918515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.918532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.918715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.918727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.918734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.918745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.918751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.918758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.854 [2024-10-01 15:59:04.918789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.854 [2024-10-01 15:59:04.918797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.854 [2024-10-01 15:59:04.928719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.928742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.854 [2024-10-01 15:59:04.929076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.929094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.854 [2024-10-01 15:59:04.929102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.929260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.854 [2024-10-01 15:59:04.929270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.854 [2024-10-01 15:59:04.929277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.854 [2024-10-01 15:59:04.929420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.929438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.854 [2024-10-01 15:59:04.929575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.854 [2024-10-01 15:59:04.929588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.854 [2024-10-01 15:59:04.929594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.854 [2024-10-01 15:59:04.929605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.854 [2024-10-01 15:59:04.929612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.854 [2024-10-01 15:59:04.929618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.854 [2024-10-01 15:59:04.929648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.854 [2024-10-01 15:59:04.929656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.854 [2024-10-01 15:59:04.939639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.854 [2024-10-01 15:59:04.939661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.854 [2024-10-01 15:59:04.939981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-10-01 15:59:04.939999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.854 [2024-10-01 15:59:04.940008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.854 [2024-10-01 15:59:04.940097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.854 [2024-10-01 15:59:04.940108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.854 [2024-10-01 15:59:04.940116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.854 [2024-10-01 15:59:04.940260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.854 [2024-10-01 15:59:04.940273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.854 [2024-10-01 15:59:04.940297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.854 [2024-10-01 15:59:04.940304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.854 [2024-10-01 15:59:04.940311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.854 [2024-10-01 15:59:04.940322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.854 [2024-10-01 15:59:04.940328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.854 [2024-10-01 15:59:04.940334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.854 [2024-10-01 15:59:04.940347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.854 [2024-10-01 15:59:04.940354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.854 [2024-10-01 15:59:04.950870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.950892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.951078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.951092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.855 [2024-10-01 15:59:04.951104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.951201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.951212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.855 [2024-10-01 15:59:04.951219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.951231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.951240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.951250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.951257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.951264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.951272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.951279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.951285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.951299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.951306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.963212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.963235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.963397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.963411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.855 [2024-10-01 15:59:04.963418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.963578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.963590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.855 [2024-10-01 15:59:04.963597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.963616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.963628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.963639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.963645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.963652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.963661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.963667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.963674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.963692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.963699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.973921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.973943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.974106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.974118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.855 [2024-10-01 15:59:04.974126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.974345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.974356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.855 [2024-10-01 15:59:04.974364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.974376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.974386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.974396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.974403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.974409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.974418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.974424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.974432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.974445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.974453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.986577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.986600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.986895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.986913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.855 [2024-10-01 15:59:04.986921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.987088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.987099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.855 [2024-10-01 15:59:04.987106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.987458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.987474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.855 [2024-10-01 15:59:04.987631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.987644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.987651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.987660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.855 [2024-10-01 15:59:04.987667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.855 [2024-10-01 15:59:04.987674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.855 [2024-10-01 15:59:04.987816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.987827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.855 [2024-10-01 15:59:04.998219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.998241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.855 [2024-10-01 15:59:04.998618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.998635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.855 [2024-10-01 15:59:04.998645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.855 [2024-10-01 15:59:04.998824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.855 [2024-10-01 15:59:04.998836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.855 [2024-10-01 15:59:04.998843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:04.999099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:04.999114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:04.999151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:04.999160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:04.999167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:04.999176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:04.999182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:04.999189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:04.999203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:04.999210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 11370.69 IOPS, 44.42 MiB/s [2024-10-01 15:59:05.010051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.010074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.010291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.010306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.856 [2024-10-01 15:59:05.010314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.010535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.010548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.856 [2024-10-01 15:59:05.010555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.010795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.010810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.010848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.010857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.010870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.010879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.010886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.010893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.010907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.010915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.022702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.022724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.023060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.023078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.856 [2024-10-01 15:59:05.023086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.023212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.023223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.856 [2024-10-01 15:59:05.023230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.023512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.023528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.023679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.023690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.023698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.023707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.023714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.023721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.023753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.023764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.033746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.033768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.034104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.034122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.856 [2024-10-01 15:59:05.034130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.034296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.034307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.856 [2024-10-01 15:59:05.034314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.034596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.034613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.034764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.034776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.034783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.034793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.034800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.034806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.034838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.034845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.045110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.045133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.045539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.045557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.856 [2024-10-01 15:59:05.045565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.045657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.045669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.856 [2024-10-01 15:59:05.045676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.045832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.045846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.045878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.045890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.045897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.045907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.045913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.045919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.045933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.045941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.856 [2024-10-01 15:59:05.056664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.056686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.856 [2024-10-01 15:59:05.056993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.057012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.856 [2024-10-01 15:59:05.057020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.057185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.856 [2024-10-01 15:59:05.057196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.856 [2024-10-01 15:59:05.057203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.856 [2024-10-01 15:59:05.057233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.057244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.856 [2024-10-01 15:59:05.057254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.057261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.057268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.057278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.856 [2024-10-01 15:59:05.057284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.856 [2024-10-01 15:59:05.057291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.856 [2024-10-01 15:59:05.057304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.057311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.067086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.067109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.067276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.067290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.067298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.067458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.067473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.067482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.067938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.067955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.068124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.068136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.068143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.068154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.068160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.068168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.068199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.068207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.079456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.079480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.079856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.079881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.079890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.080043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.080054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.080062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.080092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.080103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.080121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.080130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.080138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.080147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.080153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.080160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.080173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.080182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.089539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.089570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.089777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.089791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.089799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.090940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.090961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.090970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.090982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.091230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.091244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.091252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.091259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.091298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.091306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.091313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.091319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.091332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.101866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.101889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.102257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.102275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.102284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.102423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.102434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.102441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.102616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.102630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.102782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.102795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.102809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.102820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.102826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.102833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.102871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.102879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.112777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.112799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.113214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.113233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.113241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.113492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.113504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.113511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.113764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.113779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.113934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.113946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.113953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.113963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.113970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.113977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.114008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.114015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.857 [2024-10-01 15:59:05.123969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.123991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.857 [2024-10-01 15:59:05.124216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.124230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.857 [2024-10-01 15:59:05.124237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.124359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.857 [2024-10-01 15:59:05.124370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.857 [2024-10-01 15:59:05.124381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.857 [2024-10-01 15:59:05.124393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.124402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.857 [2024-10-01 15:59:05.124419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.124427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.124434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.124443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.857 [2024-10-01 15:59:05.124449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.857 [2024-10-01 15:59:05.124455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.857 [2024-10-01 15:59:05.124470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.124478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.134050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.134081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.134258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.134272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.134279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.134525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.134536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.134543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.134552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.134563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.134572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.134578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.134585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.134598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.134605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.134612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.134619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.134631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.145889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.145916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.146184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.146200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.146209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.146402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.146415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.146422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.146443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.146453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.146463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.146470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.146477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.146486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.146493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.146499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.146513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.146519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.157249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.157272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.157442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.157455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.157463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.157669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.157680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.157687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.158563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.158579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.158934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.158947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.158954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.158967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.158974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.158981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.159340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.159353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.168544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.168565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.168807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.168821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.168828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.168977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.168988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.168995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.169007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.169016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.169027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.169034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.169041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.169050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.169056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.169062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.169076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.169084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.181115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.181137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.181612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.181631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.181639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.181833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.181846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.181853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.182571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.182590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.182909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.182922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.182929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.182940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.182947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.182954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.182997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.183005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.191528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.191550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.858 [2024-10-01 15:59:05.191716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.191730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.858 [2024-10-01 15:59:05.191738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.191953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.858 [2024-10-01 15:59:05.191965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.858 [2024-10-01 15:59:05.191974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.858 [2024-10-01 15:59:05.192099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.192113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.858 [2024-10-01 15:59:05.192209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.192220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.192227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.192236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.858 [2024-10-01 15:59:05.192242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.858 [2024-10-01 15:59:05.192249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.858 [2024-10-01 15:59:05.192275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.192283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.858 [2024-10-01 15:59:05.202660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.202681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.202940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.202954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.859 [2024-10-01 15:59:05.202963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.203131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.203142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.859 [2024-10-01 15:59:05.203150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.203303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.203318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.203443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.203455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.203462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.203471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.203479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.203485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.203509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.203517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.213047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.213070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.213307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.213321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.859 [2024-10-01 15:59:05.213328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.213523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.213535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.859 [2024-10-01 15:59:05.213544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.213557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.213566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.213576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.213583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.213590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.213599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.213609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.213615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.213629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.213636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.223996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.224018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.224251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.224265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.859 [2024-10-01 15:59:05.224273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.224432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.224443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.859 [2024-10-01 15:59:05.224451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.224462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.224472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.224481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.224489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.224496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.224505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.224511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.224517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.224531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.224538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.234482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.234504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.859 [2024-10-01 15:59:05.234616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.234630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.859 [2024-10-01 15:59:05.234637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.234851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.859 [2024-10-01 15:59:05.234867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.859 [2024-10-01 15:59:05.234875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.859 [2024-10-01 15:59:05.234887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.234900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.859 [2024-10-01 15:59:05.234910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.234918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.234924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.234934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.859 [2024-10-01 15:59:05.234939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.859 [2024-10-01 15:59:05.234946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.859 [2024-10-01 15:59:05.234960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.234968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.859 [2024-10-01 15:59:05.246490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.246514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.246752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-10-01 15:59:05.246765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.859 [2024-10-01 15:59:05.246773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.859 [2024-10-01 15:59:05.246929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-10-01 15:59:05.246940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.859 [2024-10-01 15:59:05.246948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.859 [2024-10-01 15:59:05.247768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.859 [2024-10-01 15:59:05.247784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.859 [2024-10-01 15:59:05.248405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.859 [2024-10-01 15:59:05.248419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.859 [2024-10-01 15:59:05.248426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.859 [2024-10-01 15:59:05.248437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.859 [2024-10-01 15:59:05.248444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.859 [2024-10-01 15:59:05.248451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.859 [2024-10-01 15:59:05.248734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.859 [2024-10-01 15:59:05.248745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.859 [2024-10-01 15:59:05.258349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.258373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.258725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-10-01 15:59:05.258743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.859 [2024-10-01 15:59:05.258755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.859 [2024-10-01 15:59:05.258843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-10-01 15:59:05.258854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.859 [2024-10-01 15:59:05.258861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.859 [2024-10-01 15:59:05.259071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.859 [2024-10-01 15:59:05.259085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.859 [2024-10-01 15:59:05.259228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.859 [2024-10-01 15:59:05.259240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.859 [2024-10-01 15:59:05.259247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.859 [2024-10-01 15:59:05.259257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.859 [2024-10-01 15:59:05.259264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.859 [2024-10-01 15:59:05.259271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.859 [2024-10-01 15:59:05.259299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.859 [2024-10-01 15:59:05.259306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.859 [2024-10-01 15:59:05.269886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.269908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.859 [2024-10-01 15:59:05.270151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.859 [2024-10-01 15:59:05.270166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.860 [2024-10-01 15:59:05.270174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.270369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.270383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.270390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.270534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.270547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.270686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.270697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.270704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.860 [2024-10-01 15:59:05.270715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.270721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.270732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.270763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.270771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.282005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.282027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.282218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.282232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.282240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.282431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.282442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.860 [2024-10-01 15:59:05.282450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.282462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.282471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.282489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.282497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.282504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.282513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.282519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.282525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.282538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.282546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.860 [2024-10-01 15:59:05.293059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.293080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.293271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.293285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.860 [2024-10-01 15:59:05.293293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.293437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.293448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.293456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.293469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.293478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.293492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.293498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.293507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.860 [2024-10-01 15:59:05.293516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.293523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.293530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.293544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.293551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.304286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.304308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.304473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.304487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.304494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.304689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.304700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.860 [2024-10-01 15:59:05.304707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.304719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.304728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.304747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.304755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.304762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.304771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.304777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.304783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.304797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.304805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.860 [2024-10-01 15:59:05.314784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.314806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.314964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.314978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.860 [2024-10-01 15:59:05.314987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.315148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.315159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.315167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.860 [2024-10-01 15:59:05.315179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.315189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.860 [2024-10-01 15:59:05.315199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.315205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.315211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.860 [2024-10-01 15:59:05.315220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.860 [2024-10-01 15:59:05.315227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.860 [2024-10-01 15:59:05.315234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.860 [2024-10-01 15:59:05.315248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.315254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.860 [2024-10-01 15:59:05.325222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.325244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.860 [2024-10-01 15:59:05.325409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.860 [2024-10-01 15:59:05.325423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.860 [2024-10-01 15:59:05.325430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.325570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.325581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.325588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.325600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.325609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.325619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.325626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.325633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.325642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.325648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.325654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.325668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.325678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.861 [2024-10-01 15:59:05.335806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.335827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.336016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.336030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.336038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.336159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.336171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.336178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.336190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.336200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.336210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.336218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.336224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.861 [2024-10-01 15:59:05.336233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.336239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.336246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.336259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.336266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.347843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.348046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.348217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.348233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.348241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.348386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.348396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.348403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.349068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.349086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.349221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.349235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.349242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.349252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.349259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.349267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.350092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.350108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.861 [2024-10-01 15:59:05.358306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.358328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.358542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.358556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.358564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.358705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.358715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.358722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.358852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.358872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.358900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.358909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.358915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.861 [2024-10-01 15:59:05.358924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.358931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.358939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.358953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.358960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.369776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.369797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.370014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.370030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.370038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.370181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.370195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.370203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.370216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.370225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.370235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.370242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.370248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.370258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.370265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.370271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.370285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.370291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.861 [2024-10-01 15:59:05.380546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.380568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.380819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.380833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.380841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.381061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.381074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.381081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.381321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.381336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.381373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.381381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.381388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.861 [2024-10-01 15:59:05.381398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.861 [2024-10-01 15:59:05.381404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.861 [2024-10-01 15:59:05.381411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.861 [2024-10-01 15:59:05.381539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.381550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.861 [2024-10-01 15:59:05.391353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.391375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.861 [2024-10-01 15:59:05.391497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.391510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.861 [2024-10-01 15:59:05.391518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.391656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.861 [2024-10-01 15:59:05.391667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.861 [2024-10-01 15:59:05.391674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.861 [2024-10-01 15:59:05.391686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.861 [2024-10-01 15:59:05.391695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.391706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.391713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.391720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.391731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.391737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.391743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.391756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.391763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.862 [2024-10-01 15:59:05.402592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.402615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.402833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.402847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.402855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.403078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.403091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.403098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.403110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.403119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.403130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.403136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.403146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.862 [2024-10-01 15:59:05.403154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.403161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.403167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.403181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.403188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.413740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.413763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.414105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.414123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.414131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.414328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.414340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.414348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.414501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.414514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.414652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.414664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.414671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.414682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.414689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.414695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.414725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.414732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.862 [2024-10-01 15:59:05.424349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.424371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.424528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.424541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.424550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.424699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.424710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.424721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.424733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.424742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.424752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.424759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.424766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.862 [2024-10-01 15:59:05.424774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.424781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.424787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.424801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.424808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.435885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.435908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.436115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.436129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.436137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.436269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.436280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.436287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.436300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.436310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.436327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.436334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.436340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.436350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.436357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.436364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.436480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.436491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.862 [2024-10-01 15:59:05.446743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.446768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.446989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.447003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.447011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.447155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.447166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.447174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.447185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.447194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.447204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.447211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.447218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.862 [2024-10-01 15:59:05.447227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.862 [2024-10-01 15:59:05.447233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.862 [2024-10-01 15:59:05.447240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.862 [2024-10-01 15:59:05.447253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.447260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.862 [2024-10-01 15:59:05.458766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.458788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.862 [2024-10-01 15:59:05.458904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.458918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.862 [2024-10-01 15:59:05.458926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.459155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.862 [2024-10-01 15:59:05.459167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.862 [2024-10-01 15:59:05.459174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.862 [2024-10-01 15:59:05.459381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.862 [2024-10-01 15:59:05.459397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.459567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.459579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.459585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.459599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.459606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.459613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.459640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.459647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.863 [2024-10-01 15:59:05.468846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.468882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.469113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.469128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.469137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.469338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.469351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.469358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.469367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.469380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.469388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.469394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.469401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.863 [2024-10-01 15:59:05.469414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.469421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.469429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.469438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.469451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.479722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.479745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.479966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.479982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.479990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.480167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.480177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.480184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.480318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.480331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.480359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.480366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.480374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.480383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.480389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.480395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.480409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.480416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.863 [2024-10-01 15:59:05.491127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.491149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.491442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.491459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.491467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.491664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.491676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.491684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.491713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.491724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.491734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.491740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.491747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.863 [2024-10-01 15:59:05.491757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.491764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.491770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.491785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.491791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.501788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.501810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.502010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.502023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.502033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.502249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.502259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.502266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.502460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.502474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.502567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.502577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.502584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.502593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.502599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.502606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.502626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.502635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.863 [2024-10-01 15:59:05.512076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.512098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.512307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.512321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.512328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.512469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.512479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.512487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.512500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.512510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.512520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.512527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.512533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.863 [2024-10-01 15:59:05.512542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.512552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.512559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.512573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.512580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.522766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.522788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.863 [2024-10-01 15:59:05.522984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.522998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.863 [2024-10-01 15:59:05.523005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.523149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.863 [2024-10-01 15:59:05.523160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.863 [2024-10-01 15:59:05.523167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.863 [2024-10-01 15:59:05.523407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.523422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.863 [2024-10-01 15:59:05.523458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.523467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.523474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.523483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.863 [2024-10-01 15:59:05.523489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.863 [2024-10-01 15:59:05.523496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.863 [2024-10-01 15:59:05.523510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.863 [2024-10-01 15:59:05.523517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.864 [2024-10-01 15:59:05.534749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.534771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.535064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.535082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.535091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.535233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.535244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.535251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.535455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.535473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.535505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.535514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.535521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.864 [2024-10-01 15:59:05.535530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.535536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.535542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.535671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.535681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.544931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.544954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.545114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.545128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.545136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.545262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.545272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.545279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.545291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.545300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.545311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.545317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.545324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.545333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.545339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.545345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.545359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.545367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.864 [2024-10-01 15:59:05.557250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.557271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.557484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.557497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.557513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.557707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.557719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.557726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.558188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.558204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.558479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.558492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.558499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.864 [2024-10-01 15:59:05.558508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.558515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.558521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.558674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.558685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.567503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.567524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.567767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.567781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.567789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.567946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.567957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.567965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.567977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.567988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.567998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.568004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.568012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.568020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.568026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.568037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.568050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.568057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.864 [2024-10-01 15:59:05.579825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.579847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.579955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.579968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.579976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.580192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.580203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.580210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.580664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.580680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.580961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.580973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.580981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.864 [2024-10-01 15:59:05.580990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.580997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.581004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.581158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.581168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.864 [2024-10-01 15:59:05.590849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.590877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.864 [2024-10-01 15:59:05.591137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.591151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.864 [2024-10-01 15:59:05.591159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.591301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.864 [2024-10-01 15:59:05.591311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.864 [2024-10-01 15:59:05.591318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.864 [2024-10-01 15:59:05.591773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.591788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.864 [2024-10-01 15:59:05.591965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.591979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.864 [2024-10-01 15:59:05.591986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.864 [2024-10-01 15:59:05.591996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.864 [2024-10-01 15:59:05.592002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.592009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.592152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.592163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-10-01 15:59:05.601753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.601774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.601888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.601901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.601909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.602129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.602141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.602149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.602160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.602170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.602181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.602188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.602195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.865 [2024-10-01 15:59:05.602204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.602210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.602217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.602230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.602237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.614184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.614208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.614514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.614531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.614539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.614737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.614749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.614756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.615001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.615018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.615126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.615136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.615142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.615153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.615160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.615166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.615188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.615196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-10-01 15:59:05.624968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.624990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.625139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.625152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.625160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.625305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.625317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.625324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.625336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.625346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.625356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.625363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.625369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.865 [2024-10-01 15:59:05.625378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.625384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.625391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.625843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.625854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.635861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.635886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.636008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.636020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.636028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.636223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.636235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.636243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.636255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.636264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.636275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.636282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.636289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.636298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.636305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.636311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.636325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.636332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-10-01 15:59:05.647849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.647877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.648272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.648290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.648298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.648444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.648455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.648463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.648568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.648581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.648856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.648878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.648885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.865 [2024-10-01 15:59:05.648895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.648902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.648908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.648949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.648958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.658522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.658545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.658710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.658724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.865 [2024-10-01 15:59:05.658731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.658875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.658887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.658896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.659029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.659042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.865 [2024-10-01 15:59:05.659069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.659076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.659083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.659093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.865 [2024-10-01 15:59:05.659099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.865 [2024-10-01 15:59:05.659105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.865 [2024-10-01 15:59:05.659119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.865 [2024-10-01 15:59:05.659126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.865 [2024-10-01 15:59:05.668955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.668977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.865 [2024-10-01 15:59:05.669218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.669233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.865 [2024-10-01 15:59:05.669241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.865 [2024-10-01 15:59:05.669375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.865 [2024-10-01 15:59:05.669385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.669392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.669523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.669536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.669562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.669569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.669577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.866 [2024-10-01 15:59:05.669586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.669592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.669599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.669612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.669619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.680118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.680139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.680350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.680363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.680372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.680589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.680600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.680607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.680620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.680630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.680641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.680647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.680654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.680662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.680669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.680676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.680818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.680829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-10-01 15:59:05.690496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.690517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.690729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.690743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.690750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.690879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.690890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.690898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.690910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.690919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.690929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.690937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.690945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.866 [2024-10-01 15:59:05.690954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.690960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.690966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.690980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.690987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.700577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.700607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.700724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.700737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.700745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.700891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.700903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.700910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.700919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.700931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.700939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.700946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.700956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.700969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.700976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.700982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.700987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.701000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-10-01 15:59:05.711824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.711846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.712000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.712014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.712022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.712172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.712183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.712191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.712468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.712484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.712634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.712646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.712653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.866 [2024-10-01 15:59:05.712662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.712669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.712675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.712817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.712828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.722444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.722466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.722710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.722724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.722732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.722900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.722911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.722922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.722935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.722944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.722954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.722961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.722967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.722981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.722987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.722994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.866 [2024-10-01 15:59:05.723008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.866 [2024-10-01 15:59:05.723014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.866 [2024-10-01 15:59:05.733916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.733938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.866 [2024-10-01 15:59:05.734144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.734157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.866 [2024-10-01 15:59:05.734165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.734325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.866 [2024-10-01 15:59:05.734335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.866 [2024-10-01 15:59:05.734343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.866 [2024-10-01 15:59:05.734354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.734363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.866 [2024-10-01 15:59:05.734374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.866 [2024-10-01 15:59:05.734381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.866 [2024-10-01 15:59:05.734388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.867 [2024-10-01 15:59:05.734396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.734402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.734408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.734422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.734429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.745410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.745435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.745647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.745660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.745668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.745858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.745875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.745883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.745895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.745905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.745915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.745921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.745928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.745936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.745943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.745950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.745963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.745970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.867 [2024-10-01 15:59:05.758774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.758797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.759135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.759152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.759160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.759353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.759365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.759372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.759622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.759637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.759796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.759808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.759815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.867 [2024-10-01 15:59:05.759830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.759836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.759843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.759877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.759885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.769724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.769746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.769975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.769991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.769998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.770193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.770205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.770212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.770452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.770468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.770505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.770514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.770520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.770529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.770535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.770542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.770671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.770681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.867 [2024-10-01 15:59:05.780709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.780732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.780959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.780974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.780982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.781120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.781131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.781139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.781382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.781397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.781433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.781442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.781449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.867 [2024-10-01 15:59:05.781458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.781464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.781472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.781601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.781611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.791766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.791788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.791915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.791930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.791938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.792157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.792168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.792175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.792336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.792350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.792493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.792503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.792510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.792519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.792526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.867 [2024-10-01 15:59:05.792533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.867 [2024-10-01 15:59:05.792679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.867 [2024-10-01 15:59:05.792689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.867 [2024-10-01 15:59:05.803102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.803124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.867 [2024-10-01 15:59:05.803497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.803514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.867 [2024-10-01 15:59:05.803522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.803664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.867 [2024-10-01 15:59:05.803675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.867 [2024-10-01 15:59:05.803682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.867 [2024-10-01 15:59:05.803970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.803986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.867 [2024-10-01 15:59:05.804025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.867 [2024-10-01 15:59:05.804034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.804041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.868 [2024-10-01 15:59:05.804050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.804056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.804063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.804191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.804202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.814162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.814185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.814510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.814527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.814535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.814750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.814762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.814770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.815039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.815055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.815092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.815101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.815108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.815117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.815127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.815134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.815263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.815273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.868 [2024-10-01 15:59:05.826580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.826603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.826841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.826854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.826866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.826963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.826975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.826982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.826993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.827003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.827022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.827030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.827037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.868 [2024-10-01 15:59:05.827046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.827052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.827059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.827073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.827080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.837733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.837755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.837848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.837868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.837876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.838072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.838083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.838091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.838104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.838117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.838127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.838133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.838140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.838149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.838156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.838162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.838176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.838182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.868 [2024-10-01 15:59:05.849244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.849267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.849647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.849665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.849673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.849816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.849827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.849834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.849983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.849997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.850341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.850353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.850361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.868 [2024-10-01 15:59:05.850370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.850377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.850383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.850540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.850551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.859433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.859454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.859619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.859635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.859642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.859835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.859846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.859853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.859870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.859880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.859890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.859896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.859903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.859911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.859917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.859924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.859938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.859945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.868 [2024-10-01 15:59:05.870908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.870931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.871165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.871179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.868 [2024-10-01 15:59:05.871186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.871328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.868 [2024-10-01 15:59:05.871338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.868 [2024-10-01 15:59:05.871346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.868 [2024-10-01 15:59:05.871358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.871368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.868 [2024-10-01 15:59:05.871378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.871385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.871392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.868 [2024-10-01 15:59:05.871402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.868 [2024-10-01 15:59:05.871408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.868 [2024-10-01 15:59:05.871417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.868 [2024-10-01 15:59:05.871431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.871438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.868 [2024-10-01 15:59:05.883053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.868 [2024-10-01 15:59:05.883075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.883378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.883395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.869 [2024-10-01 15:59:05.883403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.883546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.883557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.869 [2024-10-01 15:59:05.883564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.883920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.883938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.884090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.884102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.884108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.884119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.884126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.884132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.884274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.884286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.869 [2024-10-01 15:59:05.894177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.894199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.894434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.894448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.869 [2024-10-01 15:59:05.894456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.894585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.894595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.869 [2024-10-01 15:59:05.894602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.894962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.894979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.895142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.895155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.895162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.869 [2024-10-01 15:59:05.895172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.895179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.895186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.895418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.895430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.905588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.905609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.905805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.905819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.869 [2024-10-01 15:59:05.905828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.905911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.905923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.869 [2024-10-01 15:59:05.905930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.905942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.905952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.905963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.905970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.905977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.905986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.905992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.905998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.906448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.906460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.869 [2024-10-01 15:59:05.916470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.916492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.916674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.916687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.869 [2024-10-01 15:59:05.916699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.916839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.916850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.869 [2024-10-01 15:59:05.916857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.916874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.916884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.916894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.916902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.916908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.869 [2024-10-01 15:59:05.916918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.916923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.916930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.916943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.916951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.928810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.928834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.869 [2024-10-01 15:59:05.929187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.929205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.869 [2024-10-01 15:59:05.929214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.929364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.869 [2024-10-01 15:59:05.929374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.869 [2024-10-01 15:59:05.929381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.869 [2024-10-01 15:59:05.929461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.929473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.869 [2024-10-01 15:59:05.929640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.929651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.929659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.929670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.869 [2024-10-01 15:59:05.929676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.869 [2024-10-01 15:59:05.929683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.869 [2024-10-01 15:59:05.930484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.869 [2024-10-01 15:59:05.930500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.869 [2024-10-01 15:59:05.939228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.869 [2024-10-01 15:59:05.939250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.869 [2024-10-01 15:59:05.939400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.869 [2024-10-01 15:59:05.939414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.869 [2024-10-01 15:59:05.939422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.869 [2024-10-01 15:59:05.939506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.869 [2024-10-01 15:59:05.939517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.869 [2024-10-01 15:59:05.939524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.869 [2024-10-01 15:59:05.939537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.869 [2024-10-01 15:59:05.939547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.869 [2024-10-01 15:59:05.939557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.869 [2024-10-01 15:59:05.939563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.869 [2024-10-01 15:59:05.939570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.869 [2024-10-01 15:59:05.939579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.869 [2024-10-01 15:59:05.939585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.869 [2024-10-01 15:59:05.939592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.869 [2024-10-01 15:59:05.939606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.869 [2024-10-01 15:59:05.939614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.869 [2024-10-01 15:59:05.950291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.869 [2024-10-01 15:59:05.950316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.869 [2024-10-01 15:59:05.950490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.869 [2024-10-01 15:59:05.950504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.869 [2024-10-01 15:59:05.950513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.869 [2024-10-01 15:59:05.950704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.869 [2024-10-01 15:59:05.950715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.869 [2024-10-01 15:59:05.950722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.869 [2024-10-01 15:59:05.950851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.869 [2024-10-01 15:59:05.950870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.951217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.951236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.951243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.951253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.951260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.951266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.951422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.951433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.961266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.961289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.961401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.961414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:05.961421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.961623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.961635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:05.961642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.962093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.962110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.962309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.962321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.962328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.962337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.962344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.962351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.962384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.962392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.972602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.972624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.972728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.972742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:05.972750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.972925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.972937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:05.972944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.972957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.972966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.972977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.972984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.972992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.973005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.973013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.973019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.973468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.973479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.983595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.983618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.983723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.983740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:05.983749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.983902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.983914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:05.983922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.983934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.983945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.983955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.983962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.983969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.983978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.983984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.983990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.984006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.984013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.995483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.995506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:05.995737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.995751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:05.995759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.995951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:05.995962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:05.995970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:05.996209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.996224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:05.996373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.996385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.996392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.996402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:05.996410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:05.996416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:05.996446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:05.996454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 11386.21 IOPS, 44.48 MiB/s
00:24:57.870 [2024-10-01 15:59:06.008273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:06.008292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:06.008414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:06.008428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:06.008436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:06.008532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:06.008542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:06.008549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:06.009666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:06.009685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:06.010037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:06.010050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:06.010061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:06.010071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:06.010078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:06.010085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:06.010302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:06.010314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:06.018356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:06.018378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.870 [2024-10-01 15:59:06.018489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:06.018505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.870 [2024-10-01 15:59:06.018513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:06.018603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.870 [2024-10-01 15:59:06.018613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.870 [2024-10-01 15:59:06.018621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.870 [2024-10-01 15:59:06.018633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:06.018643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.870 [2024-10-01 15:59:06.018653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:06.018660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:06.018667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:06.018676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.870 [2024-10-01 15:59:06.018682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.870 [2024-10-01 15:59:06.018689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.870 [2024-10-01 15:59:06.018703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:06.018709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.870 [2024-10-01 15:59:06.030807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.030830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.031145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.031162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.871 [2024-10-01 15:59:06.031171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.031316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.031331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.871 [2024-10-01 15:59:06.031339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.031522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.031538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.031679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.031691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.031698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.031708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.031715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.031721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.031752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.031762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.041537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.041560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.041807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.041823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.871 [2024-10-01 15:59:06.041832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.041927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.041939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.871 [2024-10-01 15:59:06.041946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.042091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.042104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.042130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.042139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.042145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.042155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.042161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.042169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.042183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.042190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.053426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.053451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.053612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.053626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.871 [2024-10-01 15:59:06.053634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.053764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.053775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.871 [2024-10-01 15:59:06.053782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.053794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.053804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.053815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.053821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.053828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.053836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.053842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.053849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.053869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.053877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.065389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.065412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.065548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.065562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.871 [2024-10-01 15:59:06.065570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.065724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.065735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.871 [2024-10-01 15:59:06.065743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.066088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.066105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.066377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.066389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.066396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.066409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.066416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.066423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.066576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.066587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.871 [2024-10-01 15:59:06.076393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.076416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.871 [2024-10-01 15:59:06.076761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.076779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.871 [2024-10-01 15:59:06.076787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.076887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.871 [2024-10-01 15:59:06.076898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.871 [2024-10-01 15:59:06.076906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.871 [2024-10-01 15:59:06.077051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.077065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.871 [2024-10-01 15:59:06.077092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.871 [2024-10-01 15:59:06.077101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.871 [2024-10-01 15:59:06.077107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.871 [2024-10-01 15:59:06.077116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.871 [2024-10-01 15:59:06.077122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.871 [2024-10-01 15:59:06.077129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.871 [2024-10-01 15:59:06.077143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.871 [2024-10-01 15:59:06.077151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.871 [2024-10-01 15:59:06.087442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.871 [2024-10-01 15:59:06.087465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.871 [2024-10-01 15:59:06.087750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.871 [2024-10-01 15:59:06.087768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.871 [2024-10-01 15:59:06.087777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.871 [2024-10-01 15:59:06.087926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.871 [2024-10-01 15:59:06.087938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.871 [2024-10-01 15:59:06.087948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.871 [2024-10-01 15:59:06.088203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.871 [2024-10-01 15:59:06.088217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.871 [2024-10-01 15:59:06.088377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.871 [2024-10-01 15:59:06.088390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.871 [2024-10-01 15:59:06.088397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.871 [2024-10-01 15:59:06.088407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.871 [2024-10-01 15:59:06.088414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.871 [2024-10-01 15:59:06.088421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.088451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.088459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.872 [2024-10-01 15:59:06.098161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.098321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.098428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.098444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.098452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.098566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.098579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.098587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.098595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.098725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.098736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.098743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.098750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.872 [2024-10-01 15:59:06.098781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.098789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.098795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.098803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.098815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.109718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.109740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.109907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.109922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.109931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.110066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.110077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.110085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.110097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.110109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.110120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.110127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.110134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.110143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.110150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.110157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.110172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.110179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.872 [2024-10-01 15:59:06.121316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.121339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.121462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.121475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.121484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.121652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.121664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.121672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.121684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.121694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.121704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.121711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.121718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.872 [2024-10-01 15:59:06.121727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.121736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.121743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.121758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.121765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.133249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.133272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.133574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.133592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.133600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.133694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.133706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.133713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.133897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.133912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.133939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.133947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.133954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.133963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.133969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.133976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.133990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.133997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.872 [2024-10-01 15:59:06.143629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.143652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.143804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.143818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.143826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.143934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.143945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.143953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.143969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.143978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.143988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.143995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.144001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.872 [2024-10-01 15:59:06.144011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.144018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.144025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.144039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.144046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.155236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.155259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.155438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.155453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.872 [2024-10-01 15:59:06.155462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.155567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.155578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.155586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.155715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.155728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.872 [2024-10-01 15:59:06.155755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.155762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.155770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.155780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.872 [2024-10-01 15:59:06.155787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.872 [2024-10-01 15:59:06.155793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.872 [2024-10-01 15:59:06.155808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.872 [2024-10-01 15:59:06.155815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.872 [2024-10-01 15:59:06.166328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.166352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.872 [2024-10-01 15:59:06.166500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.872 [2024-10-01 15:59:06.166518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.872 [2024-10-01 15:59:06.166526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.872 [2024-10-01 15:59:06.166621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.166632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.873 [2024-10-01 15:59:06.166639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.166652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.166662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.166916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.166927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.166934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.873 [2024-10-01 15:59:06.166944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.166951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.166958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.167248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.167259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.177389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.177413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.177656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.177672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.873 [2024-10-01 15:59:06.177681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.177812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.177823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.873 [2024-10-01 15:59:06.177831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.177980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.177995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.178021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.178029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.178037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.178047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.178053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.178063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.178078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.178085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.873 [2024-10-01 15:59:06.188472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.188494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.188804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.188822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.873 [2024-10-01 15:59:06.188830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.188918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.188930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.873 [2024-10-01 15:59:06.188938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.189082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.189096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.189245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.189256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.189264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.873 [2024-10-01 15:59:06.189273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.189279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.189286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.189315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.189325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.199240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.199263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.873 [2024-10-01 15:59:06.199400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.199415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.873 [2024-10-01 15:59:06.199422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.199516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.873 [2024-10-01 15:59:06.199527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.873 [2024-10-01 15:59:06.199536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.873 [2024-10-01 15:59:06.199665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.199682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.873 [2024-10-01 15:59:06.199821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.199832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.199839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.199849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.873 [2024-10-01 15:59:06.199855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.873 [2024-10-01 15:59:06.199861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.873 [2024-10-01 15:59:06.199899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.873 [2024-10-01 15:59:06.199908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.873 [2024-10-01 15:59:06.210263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.210287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.210605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.210623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.873 [2024-10-01 15:59:06.210631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.210723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.210734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.873 [2024-10-01 15:59:06.210741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.210771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.210782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.210793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.873 [2024-10-01 15:59:06.210800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.873 [2024-10-01 15:59:06.210808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.873 [2024-10-01 15:59:06.210817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.873 [2024-10-01 15:59:06.210824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.873 [2024-10-01 15:59:06.210831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.873 [2024-10-01 15:59:06.210844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.873 [2024-10-01 15:59:06.210851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.873 [2024-10-01 15:59:06.221330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.221353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.221608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.221633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.873 [2024-10-01 15:59:06.221646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.221729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.221740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.873 [2024-10-01 15:59:06.221747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.221898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.221912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.222059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.873 [2024-10-01 15:59:06.222069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.873 [2024-10-01 15:59:06.222076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.873 [2024-10-01 15:59:06.222085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.873 [2024-10-01 15:59:06.222091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.873 [2024-10-01 15:59:06.222098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.873 [2024-10-01 15:59:06.222128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.873 [2024-10-01 15:59:06.222137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.873 [2024-10-01 15:59:06.232053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.232075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.873 [2024-10-01 15:59:06.232299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.232315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.873 [2024-10-01 15:59:06.232323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.232415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.873 [2024-10-01 15:59:06.232427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.873 [2024-10-01 15:59:06.232434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.873 [2024-10-01 15:59:06.232607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.232621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.873 [2024-10-01 15:59:06.232651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.873 [2024-10-01 15:59:06.232659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.873 [2024-10-01 15:59:06.232665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.873 [2024-10-01 15:59:06.232675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.232682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.232688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.232708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.232715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.242485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.242507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.242693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.242708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.242715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.242802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.242813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.242820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.242956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.242970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.243112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.243123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.243130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.243140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.243147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.243154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.243183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.243191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.253896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.253918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.254032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.254046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.254054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.254137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.254147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.254154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.254166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.254176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.254190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.254197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.254204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.254213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.254218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.254225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.254238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.254246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.264912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.264935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.265157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.265172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.265180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.265280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.265291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.265298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.265635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.265651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.265804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.265817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.265824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.265834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.265841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.265848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.265996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.266007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.275286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.275308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.275551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.275567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.275575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.275740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.275753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.275760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.275913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.275928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.275956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.275965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.275972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.275981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.275987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.275994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.276121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.276132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.285514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.285535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.285721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.285734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.285743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.285820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.285830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.285837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.285980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.285994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.286134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.286144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.286151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.286161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.286168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.286175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.286201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.286212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.296348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.296370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.296476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.296490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.874 [2024-10-01 15:59:06.296498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.296619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.296631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.296638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.874 [2024-10-01 15:59:06.296767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.296780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.874 [2024-10-01 15:59:06.296807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.296815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.296822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.296832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.874 [2024-10-01 15:59:06.296839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.874 [2024-10-01 15:59:06.296846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.874 [2024-10-01 15:59:06.296980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.296991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.874 [2024-10-01 15:59:06.306947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.306969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.874 [2024-10-01 15:59:06.307199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.874 [2024-10-01 15:59:06.307214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.874 [2024-10-01 15:59:06.307221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.307363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.307373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.875 [2024-10-01 15:59:06.307380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.307511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.307524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.307662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.307678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.307685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.307694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.307701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.307707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.307852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.307919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.318307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.318330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.318466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.318480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.875 [2024-10-01 15:59:06.318488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.318589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.318600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.875 [2024-10-01 15:59:06.318607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.318627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.318638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.318648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.318655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.318662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.318671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.318677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.318683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.318697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.318705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.329260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.329282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.329493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.329507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.875 [2024-10-01 15:59:06.329515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.329638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.329653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.875 [2024-10-01 15:59:06.329661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.329673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.329683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.329693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.329700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.329706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.329715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.329721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.329728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.329742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.329749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.340346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.340368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.340539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.340554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.875 [2024-10-01 15:59:06.340561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.340702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.340713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.875 [2024-10-01 15:59:06.340720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.340732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.340742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.340752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.340759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.340766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.340775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.340781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.340787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.340801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.340808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.351435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.351458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.875 [2024-10-01 15:59:06.351707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.351722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.875 [2024-10-01 15:59:06.351730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.351880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.875 [2024-10-01 15:59:06.351892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.875 [2024-10-01 15:59:06.351899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.875 [2024-10-01 15:59:06.352061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.352075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.875 [2024-10-01 15:59:06.352216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.352228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.352235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.352245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.875 [2024-10-01 15:59:06.352252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.875 [2024-10-01 15:59:06.352258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.875 [2024-10-01 15:59:06.352400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.352412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.875 [2024-10-01 15:59:06.361516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.875 [2024-10-01 15:59:06.361546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.875 [2024-10-01 15:59:06.361689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.875 [2024-10-01 15:59:06.361702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.875 [2024-10-01 15:59:06.361710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.875 [2024-10-01 15:59:06.361855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.875 [2024-10-01 15:59:06.361871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.875 [2024-10-01 15:59:06.361878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.875 [2024-10-01 15:59:06.361887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.875 [2024-10-01 15:59:06.361899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.875 [2024-10-01 15:59:06.361907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.875 [2024-10-01 15:59:06.361914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.875 [2024-10-01 15:59:06.361924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.875 [2024-10-01 15:59:06.361937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.875 [2024-10-01 15:59:06.361944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.361949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.361956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.361969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.373613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.373635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.373797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.373810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.876 [2024-10-01 15:59:06.373817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.374012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.374024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.374031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.374043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.374052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.374070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.374079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.374086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.374095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.374101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.374108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.374121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.374128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.876 [2024-10-01 15:59:06.385052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.385074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.385265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.385279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.385288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.385427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.385438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.876 [2024-10-01 15:59:06.385449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.385461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.385471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.385481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.385488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.385494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.876 [2024-10-01 15:59:06.385503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.385509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.385516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.385530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.385537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.396383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.396405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.396974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.396995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.876 [2024-10-01 15:59:06.397004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.397227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.397239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.397247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.397408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.397423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.397460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.397469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.397476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.397485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.397492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.397498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.397512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.397519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.876 [2024-10-01 15:59:06.407142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.407163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.407325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.407339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.407346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.407429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.407440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.876 [2024-10-01 15:59:06.407447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.407793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.407807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.407972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.407984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.407992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.876 [2024-10-01 15:59:06.408001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.408008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.408014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.408045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.408053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.417918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.417939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.418101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.418114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.876 [2024-10-01 15:59:06.418123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.418317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.418328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.418335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.418348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.418357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.876 [2024-10-01 15:59:06.418367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.418374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.418381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.418391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.876 [2024-10-01 15:59:06.418400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.876 [2024-10-01 15:59:06.418406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.876 [2024-10-01 15:59:06.418420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.876 [2024-10-01 15:59:06.418426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.876 [2024-10-01 15:59:06.429808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.429829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.876 [2024-10-01 15:59:06.430001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.876 [2024-10-01 15:59:06.430015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.876 [2024-10-01 15:59:06.430022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.876 [2024-10-01 15:59:06.430157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.430168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.430176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.430188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.430197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.430207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.430213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.430220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.877 [2024-10-01 15:59:06.430229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.430236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.430243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.430256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.430263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.442554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.442576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.442826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.442840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.442848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.443098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.443111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.443118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.443508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.443523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.443681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.443694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.443701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.443710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.443717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.443724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.443876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.443887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.877 [2024-10-01 15:59:06.453646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.453667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.453860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.453878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.453887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.453979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.453990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.453997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.454009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.454018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.454028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.454035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.454042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.877 [2024-10-01 15:59:06.454052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.454060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.454067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.454081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.454087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.466626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.466648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.467177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.467199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.467207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.467415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.467427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.467435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.467698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.467714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.467879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.467892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.467899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.467909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.467916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.467922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.467953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.467960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.877 [2024-10-01 15:59:06.477701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.477724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.477956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.477971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.477979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.478154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.478165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.478173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.478184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.478195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.478205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.478211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.478218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.877 [2024-10-01 15:59:06.478228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.478235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.478245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.478259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.478265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.488513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.488536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.488725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.488740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.488749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.488895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.488906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.488914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.489045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.489058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.489199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.489209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.489216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.489226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.489233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.489240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.877 [2024-10-01 15:59:06.489270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.877 [2024-10-01 15:59:06.489279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.877 [2024-10-01 15:59:06.500715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.500738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.877 [2024-10-01 15:59:06.500847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.500867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.877 [2024-10-01 15:59:06.500875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.501009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.877 [2024-10-01 15:59:06.501020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.877 [2024-10-01 15:59:06.501027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.877 [2024-10-01 15:59:06.501040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.501054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.877 [2024-10-01 15:59:06.501064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.877 [2024-10-01 15:59:06.501071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.877 [2024-10-01 15:59:06.501077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.878 [2024-10-01 15:59:06.501086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.501092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.501099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.501113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.501121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.510998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.511020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.511182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.511196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.511203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.511342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.511353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.511361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.511373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.511383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.511394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.511400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.511407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.511416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.511423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.511430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.511443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.511450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.878 [2024-10-01 15:59:06.522800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.522822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.523201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.523220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.523234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.523320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.523331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.523338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.523483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.523497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.523533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.523542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.523549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.878 [2024-10-01 15:59:06.523559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.523565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.523571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.523748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.523759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.534610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.534632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.534987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.535007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.535015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.535237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.535250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.535257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.535569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.535586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.535738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.535749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.535757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.535766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.535773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.535779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.535814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.535822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.878 [2024-10-01 15:59:06.545849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.545879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.546059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.546073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.546081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.546301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.546312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.546319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.546558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.546574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.546620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.546630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.546637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.878 [2024-10-01 15:59:06.546646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.546653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.546660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.546674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.546681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.556528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.556549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.556759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.556773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.556781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.556998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.557011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.557020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.557261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.557276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.557430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.557441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.557449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.557458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.557465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.557472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.878 [2024-10-01 15:59:06.557502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.878 [2024-10-01 15:59:06.557509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.878 [2024-10-01 15:59:06.567522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.567545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.878 [2024-10-01 15:59:06.567790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.567804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.878 [2024-10-01 15:59:06.567813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.567898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.878 [2024-10-01 15:59:06.567910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.878 [2024-10-01 15:59:06.567918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.878 [2024-10-01 15:59:06.568048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.568062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.878 [2024-10-01 15:59:06.568201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.568212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.568220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.878 [2024-10-01 15:59:06.568230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.878 [2024-10-01 15:59:06.568236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.878 [2024-10-01 15:59:06.568243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.568272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.568282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.578849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.578878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.579284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.579301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.579309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.579407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.579418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.579425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.579582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.579596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.579735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.579746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.579753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.579762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.579769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.579776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.579806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.579814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.879 [2024-10-01 15:59:06.590360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.590382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.590713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.590731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.590739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.590885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.590897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.590905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.591081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.591095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.591236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.591248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.591255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.879 [2024-10-01 15:59:06.591264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.591271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.591278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.591309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.591320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.601875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.601897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.602273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.602290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.602298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.602490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.602502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.602509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.602802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.602818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.602859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.602873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.602880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.602890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.602896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.602903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.603033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.603043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.879 [2024-10-01 15:59:06.613397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.613419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.613769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.613787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.613795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.613989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.614000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.614008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.614295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.614310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.614462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.614478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.614486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.879 [2024-10-01 15:59:06.614496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.614503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.614509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.614541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.614549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.624834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.624856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.625228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.625247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.625255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.625396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.625407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.625415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.625559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.625573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.625712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.625722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.625730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.625741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.625747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.625754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.625784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.625793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.879 [2024-10-01 15:59:06.636029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.636051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.636212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.636226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.636234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.636429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.636444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.636451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.636464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.636473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.636483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.636489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.636496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.879 [2024-10-01 15:59:06.636505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.636512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.636518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.879 [2024-10-01 15:59:06.636532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.636539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.879 [2024-10-01 15:59:06.648703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.648725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.879 [2024-10-01 15:59:06.648962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.648977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.879 [2024-10-01 15:59:06.648984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.649216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.879 [2024-10-01 15:59:06.649229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.879 [2024-10-01 15:59:06.649236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.879 [2024-10-01 15:59:06.649534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.649549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.879 [2024-10-01 15:59:06.649795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.879 [2024-10-01 15:59:06.649807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.879 [2024-10-01 15:59:06.649815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.649824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.649831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.649837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.650200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.650214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.880 [2024-10-01 15:59:06.660552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.660574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.660674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.660686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.660695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.660776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.660786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.660793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.661145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.661160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.661322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.661334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.661341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.880 [2024-10-01 15:59:06.661351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.661359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.661365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.661397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.661404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.672074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.672096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.672315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.672329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.672336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.672486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.672497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.672504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.672956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.672972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.673170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.673183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.673194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.673205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.673212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.673219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.673364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.673375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.880 [2024-10-01 15:59:06.682709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.682730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.682968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.682983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.682991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.683077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.683088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.683095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.683107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.683117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.683128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.683135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.683142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.880 [2024-10-01 15:59:06.683150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.683157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.683163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.683177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.683185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.694872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.694894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.695091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.695105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.695112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.695241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.695252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.695262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.695274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.695283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.695294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.695301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.695308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.695317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.695323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.695329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.695351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.695359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.880 [2024-10-01 15:59:06.706124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.706146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.706313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.706327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.706334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.706484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.706495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.706502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.706514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.706523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.706534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.706540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.706548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.880 [2024-10-01 15:59:06.706557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.706564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.706570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.706584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.706591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.717366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.717392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.717854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.717880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.717888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.718109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.718121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.718129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.718599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.718616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.718779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.718791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.718798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.718808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.718816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.718822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.718973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.880 [2024-10-01 15:59:06.718984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.880 [2024-10-01 15:59:06.728558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.728580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.880 [2024-10-01 15:59:06.728822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.728836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.880 [2024-10-01 15:59:06.728844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.729038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.880 [2024-10-01 15:59:06.729049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.880 [2024-10-01 15:59:06.729058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.880 [2024-10-01 15:59:06.729511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.729527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.880 [2024-10-01 15:59:06.729695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.729706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.729714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.880 [2024-10-01 15:59:06.729727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.880 [2024-10-01 15:59:06.729734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.880 [2024-10-01 15:59:06.729741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.880 [2024-10-01 15:59:06.729893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.729904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.739290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.739312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.739553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.739567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.739574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.739794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.739806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.739813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.739826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.739836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.739846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.739852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.739859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.739873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.739879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.739885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.739907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.739914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.881 [2024-10-01 15:59:06.751972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.751995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.752206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.752220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.752228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.752372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.752383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.752390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.752412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.752422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.752432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.752438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.752445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.881 [2024-10-01 15:59:06.752453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.752460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.752466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.752479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.752485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.763441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.763463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.763581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.763595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.763602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.763735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.763746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.763753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.763765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.763774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.763786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.763793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.763800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.763809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.763815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.763821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.763834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.763841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.881 [2024-10-01 15:59:06.773821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.773843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.774015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.774029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.774037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.774241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.774252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.774259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.774271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.774280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.774292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.774299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.774306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.881 [2024-10-01 15:59:06.774315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.774321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.774328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.774342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.774350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.785913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.785936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.786276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.786295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.786304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.786451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.786462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.786469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.786721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.786736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.786901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.786914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.786921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.786931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.786937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.786948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.786978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.786986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.881 [2024-10-01 15:59:06.797017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.797038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.797311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.797327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.881 [2024-10-01 15:59:06.797335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.797527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.881 [2024-10-01 15:59:06.797538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.881 [2024-10-01 15:59:06.797545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.881 [2024-10-01 15:59:06.797558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.797568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.881 [2024-10-01 15:59:06.797578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.797584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.797591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.881 [2024-10-01 15:59:06.797600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.881 [2024-10-01 15:59:06.797606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.881 [2024-10-01 15:59:06.797613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.881 [2024-10-01 15:59:06.797628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.797635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.881 [2024-10-01 15:59:06.807341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.807363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.881 [2024-10-01 15:59:06.807602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.807616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.807624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.807719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.807729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.807736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.808122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.808146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.808305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.808317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.808324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.808334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.808342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.808348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.808379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.808387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.882 [2024-10-01 15:59:06.818484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.818507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.818736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.818750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.818758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.818977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.818990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.818997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.819010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.819020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.819031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.819037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.819044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.882 [2024-10-01 15:59:06.819052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.819059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.819066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.819080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.819088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.830482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.830506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.830744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.830758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.830770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.830916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.830927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.830934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.830947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.830956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.830974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.830982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.830989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.830998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.831004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.831010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.831024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.831031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.882 [2024-10-01 15:59:06.842527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.842550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.842674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.842688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.842696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.842914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.842926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.842933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.843117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.843131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.843159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.843168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.843175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.882 [2024-10-01 15:59:06.843185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.843191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.843201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.843330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.843340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.853977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.853999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.854356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.854374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.854382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.854492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.854504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.854511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.854686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.854700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.854841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.854851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.854859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.854875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.854883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.854890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.854921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.854929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.882 [2024-10-01 15:59:06.864667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.864690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.864805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.864818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.864827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.865019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.865030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.865037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.865049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.865060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.865073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.865080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.865087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.882 [2024-10-01 15:59:06.865096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.865102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.865109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.882 [2024-10-01 15:59:06.865124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.865131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.882 [2024-10-01 15:59:06.874749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.874779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.882 [2024-10-01 15:59:06.874897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.874911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.882 [2024-10-01 15:59:06.874919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.875090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.882 [2024-10-01 15:59:06.875101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.882 [2024-10-01 15:59:06.875110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.882 [2024-10-01 15:59:06.875119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.875131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.882 [2024-10-01 15:59:06.875139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.882 [2024-10-01 15:59:06.875146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.882 [2024-10-01 15:59:06.875153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.875166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.875174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.875180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.875187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.875199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.883 [2024-10-01 15:59:06.887287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.887308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.887521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.887534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.887543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.887692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.887703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.887711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.887722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.887731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.887741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.887748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.887756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.883 [2024-10-01 15:59:06.887764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.887770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.887777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.887790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.887796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.898731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.898753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.898989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.899004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.899012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.899269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.899282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.899289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.899301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.899311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.899321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.899327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.899335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.899343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.899349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.899356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.899371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.899381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.883 [2024-10-01 15:59:06.911318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.911340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.911577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.911592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.911600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.911759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.911770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.911777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.912061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.912077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.912439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.912453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.912460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.883 [2024-10-01 15:59:06.912470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.912477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.912484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.912642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.912652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.924483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.924506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.924657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.924671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.924679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.924802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.924812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.924819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.924831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.924841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.924860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.924877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.924884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.924893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.924899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.924905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.924920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.924928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.883 [2024-10-01 15:59:06.936781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.936803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.936969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.936984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.936991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.937211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.937221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.937229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.937720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.937736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.938384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.938399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.938407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.883 [2024-10-01 15:59:06.938416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.938422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.938429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.938771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.938783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.947840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.947867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.883 [2024-10-01 15:59:06.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.948123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.883 [2024-10-01 15:59:06.948131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.948327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.883 [2024-10-01 15:59:06.948344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.883 [2024-10-01 15:59:06.948351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.883 [2024-10-01 15:59:06.948363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.948373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.883 [2024-10-01 15:59:06.948624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.948636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.948643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.948653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.883 [2024-10-01 15:59:06.948660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.883 [2024-10-01 15:59:06.948666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.883 [2024-10-01 15:59:06.949445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.883 [2024-10-01 15:59:06.949460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.884 [2024-10-01 15:59:06.959122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.959145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.959392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.959405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:06.959415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.959658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.959670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:06.959677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.959689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.959700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.959710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.959716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.959723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.884 [2024-10-01 15:59:06.959731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.959737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.959744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.959758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.959765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.970437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.970459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.970618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.970632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:06.970639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.970837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.970848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:06.970855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.970871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.970881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.970891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.970898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.970904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.970913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.970919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.970927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.970941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.970948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.884 [2024-10-01 15:59:06.982501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.982524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.982779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.982795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:06.982803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.982898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.982909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:06.982916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.982928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.982938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.982957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.982964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.982974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.884 [2024-10-01 15:59:06.982984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.982990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.982997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.983011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.983018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.995355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.995377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:06.995538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.995553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:06.995560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.995778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:06.995788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:06.995796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:06.995808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.995818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:06.995828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.995834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.995841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.995850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:06.995856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:06.995868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:06.995882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:06.995889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.884
00:24:57.884 Latency(us)
00:24:57.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.884 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:57.884 Verification LBA range: start 0x0 length 0x4000
00:24:57.884 NVMe0n1 : 15.01 11380.60 44.46 0.00 0.00 11225.58 1997.29 16352.79
00:24:57.884 ===================================================================================================================
00:24:57.884 Total : 11380.60 44.46 0.00 0.00 11225.58 1997.29 16352.79
00:24:57.884 [2024-10-01 15:59:07.006636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.884 [2024-10-01 15:59:07.006659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.884 [2024-10-01 15:59:07.007423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.884 [2024-10-01 15:59:07.007440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.884 [2024-10-01 15:59:07.007448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.884 [2024-10-01 15:59:07.007686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.884 [2024-10-01 15:59:07.007697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.884 [2024-10-01 15:59:07.007704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.884 [2024-10-01 15:59:07.007715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.884 [2024-10-01 15:59:07.007725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*:
Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:07.007734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.007740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.007747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:07.007755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.007761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.007768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:07.007778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:07.007785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.884 [2024-10-01 15:59:07.016710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:07.016731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:07.016950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:07.016963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:07.016970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:07.017209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:07.017221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:07.017228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:07.017236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:07.017247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:07.017255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.017261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.017268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:57.884 [2024-10-01 15:59:07.017277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:07.017287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.017293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.017300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:07.017309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:07.026761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:07.027006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:07.027021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.884 [2024-10-01 15:59:07.027029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:07.027044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:07.027054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:07.027067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.027073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.027080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:07.027088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.884 [2024-10-01 15:59:07.027334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:07.027346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.884 [2024-10-01 15:59:07.027353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.884 [2024-10-01 15:59:07.027362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.884 [2024-10-01 15:59:07.027370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.884 [2024-10-01 15:59:07.027377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.884 [2024-10-01 15:59:07.027385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.884 [2024-10-01 15:59:07.027393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.884 [2024-10-01 15:59:07.036809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.884 [2024-10-01 15:59:07.037055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.884 [2024-10-01 15:59:07.037071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.885 [2024-10-01 15:59:07.037079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.037089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.037101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.037108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.037114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.037130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.037139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.885 [2024-10-01 15:59:07.037288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.885 [2024-10-01 15:59:07.037300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.885 [2024-10-01 15:59:07.037308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.037316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.037325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.037331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.037339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.037349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.046859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.885 [2024-10-01 15:59:07.047103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.885 [2024-10-01 15:59:07.047116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.885 [2024-10-01 15:59:07.047124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.047134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.047144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.047150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.047158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.047166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.047182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.885 [2024-10-01 15:59:07.047402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.885 [2024-10-01 15:59:07.047414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.885 [2024-10-01 15:59:07.047422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.047432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.047441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.047447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.047453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.047462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.056910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.885 [2024-10-01 15:59:07.057087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.885 [2024-10-01 15:59:07.057100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421 00:24:57.885 [2024-10-01 15:59:07.057111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.057122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.057131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.057138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.057145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.057154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.057225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.885 [2024-10-01 15:59:07.057453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.885 [2024-10-01 15:59:07.057465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422 00:24:57.885 [2024-10-01 15:59:07.057473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set 00:24:57.885 [2024-10-01 15:59:07.057482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor 00:24:57.885 [2024-10-01 15:59:07.057491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.885 [2024-10-01 15:59:07.057497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.885 [2024-10-01 15:59:07.057504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.885 [2024-10-01 15:59:07.057512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.885 [2024-10-01 15:59:07.066957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.885 [2024-10-01 15:59:07.067186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.885 [2024-10-01 15:59:07.067199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.885 [2024-10-01 15:59:07.067206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.885 [2024-10-01 15:59:07.067216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.885 [2024-10-01 15:59:07.067226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.885 [2024-10-01 15:59:07.067232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.885 [2024-10-01 15:59:07.067239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.885 [2024-10-01 15:59:07.067248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.885 [2024-10-01 15:59:07.067269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.885 [2024-10-01 15:59:07.067489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.885 [2024-10-01 15:59:07.067500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.885 [2024-10-01 15:59:07.067507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.885 [2024-10-01 15:59:07.067517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.885 [2024-10-01 15:59:07.067526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.885 [2024-10-01 15:59:07.067535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.885 [2024-10-01 15:59:07.067542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.885 [2024-10-01 15:59:07.067551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.885 [2024-10-01 15:59:07.077005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.885 [2024-10-01 15:59:07.077233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.885 [2024-10-01 15:59:07.077245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x987b70 with addr=10.0.0.2, port=4421
00:24:57.885 [2024-10-01 15:59:07.077253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x987b70 is same with the state(6) to be set
00:24:57.885 [2024-10-01 15:59:07.077263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x987b70 (9): Bad file descriptor
00:24:57.885 [2024-10-01 15:59:07.077272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.885 [2024-10-01 15:59:07.077279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.885 [2024-10-01 15:59:07.077285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.885 [2024-10-01 15:59:07.077294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.885 [2024-10-01 15:59:07.077314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.885 [2024-10-01 15:59:07.077475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.885 [2024-10-01 15:59:07.077487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf070 with addr=10.0.0.2, port=4422
00:24:57.885 [2024-10-01 15:59:07.077494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf070 is same with the state(6) to be set
00:24:57.885 [2024-10-01 15:59:07.077503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf070 (9): Bad file descriptor
00:24:57.885 [2024-10-01 15:59:07.077512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.885 [2024-10-01 15:59:07.077519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.885 [2024-10-01 15:59:07.077526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.885 [2024-10-01 15:59:07.077534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.885 Received shutdown signal, test time was about 15.000000 seconds 00:24:57.885 00:24:57.885 Latency(us) 00:24:57.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.885 =================================================================================================================== 00:24:57.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # killprocess 2532431 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2532431 ']' 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2532431 00:24:57.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2532431) - No such process 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # echo 'Process with pid 2532431 is not found' 00:24:57.885 Process with pid 2532431 is not found 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # nvmftestfini 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.885 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.885 rmmod nvme_tcp 00:24:57.886 rmmod nvme_fabrics 00:24:57.886 rmmod nvme_keyring 00:24:57.886 15:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 2532152 ']' 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 2532152 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2532152 ']' 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2532152 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2532152 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2532152' 00:24:57.886 killing process with pid 2532152 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2532152 00:24:57.886 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2532152 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:58.144 15:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.144 15:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # exit 1 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh' 'nvmf_failover' '--transport=tcp') 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:25:00.048 ========== Backtrace start: ========== 00:25:00.048 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_failover"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh"],["--transport=tcp"]) 00:25:00.048 ... 00:25:00.048 1120 timing_enter $test_name 00:25:00.048 1121 echo "************************************" 00:25:00.048 1122 echo "START TEST $test_name" 00:25:00.048 1123 echo "************************************" 00:25:00.048 1124 xtrace_restore 00:25:00.048 1125 time "$@" 00:25:00.048 1126 xtrace_disable 00:25:00.048 1127 echo "************************************" 00:25:00.048 1128 echo "END TEST $test_name" 00:25:00.048 1129 echo "************************************" 00:25:00.048 1130 timing_exit $test_name 00:25:00.048 ... 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh:25 -> main(["--transport=tcp"]) 00:25:00.048 ... 
00:25:00.048 20 fi 00:25:00.048 21 00:25:00.048 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:25:00.048 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:25:00.048 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:25:00.048 => 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:25:00.048 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:25:00.048 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:25:00.048 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:25:00.048 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:25:00.048 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:25:00.048 ... 00:25:00.048 00:25:00.048 ========== Backtrace end ========== 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:25:00.048 00:25:00.048 real 0m28.121s 00:25:00.048 user 1m18.283s 00:25:00.048 sys 0m7.423s 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1 -- # exit 1 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # 
xtrace_disable 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 ========== Backtrace start: ========== 00:25:00.048 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:25:00.048 ... 00:25:00.048 1120 timing_enter $test_name 00:25:00.048 1121 echo "************************************" 00:25:00.048 1122 echo "START TEST $test_name" 00:25:00.048 1123 echo "************************************" 00:25:00.048 1124 xtrace_restore 00:25:00.048 1125 time "$@" 00:25:00.048 1126 xtrace_disable 00:25:00.048 1127 echo "************************************" 00:25:00.048 1128 echo "END TEST $test_name" 00:25:00.048 1129 echo "************************************" 00:25:00.048 1130 timing_exit $test_name 00:25:00.048 ... 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:25:00.048 ... 00:25:00.048 11 exit 0 00:25:00.048 12 fi 00:25:00.048 13 00:25:00.048 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.048 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.048 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.048 17 00:25:00.048 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 
00:25:00.048 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:25:00.048 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:25:00.048 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:25:00.048 ... 00:25:00.048 00:25:00.048 ========== Backtrace end ========== 00:25:00.048 15:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:25:00.048 00:25:00.048 real 1m52.712s 00:25:00.048 user 3m50.312s 00:25:00.048 sys 0m42.569s 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:25:00.048 15:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 ========== Backtrace start: ========== 00:25:00.048 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:25:00.048 ... 
00:25:00.048 1120 timing_enter $test_name 00:25:00.048 1121 echo "************************************" 00:25:00.048 1122 echo "START TEST $test_name" 00:25:00.048 1123 echo "************************************" 00:25:00.048 1124 xtrace_restore 00:25:00.048 1125 time "$@" 00:25:00.048 1126 xtrace_disable 00:25:00.048 1127 echo "************************************" 00:25:00.048 1128 echo "END TEST $test_name" 00:25:00.048 1129 echo "************************************" 00:25:00.048 1130 timing_exit $test_name 00:25:00.048 ... 00:25:00.048 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:280 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:25:00.048 ... 00:25:00.049 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:25:00.049 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:25:00.049 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.049 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.049 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:25:00.049 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.049 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:25:00.049 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.049 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:00.049 284 fi 00:25:00.049 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:25:00.049 ... 
00:25:00.049 00:25:00.049 ========== Backtrace end ========== 00:25:00.049 15:59:09 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:25:00.049 00:25:00.049 real 18m44.949s 00:25:00.049 user 40m58.438s 00:25:00.049 sys 6m2.987s 00:25:00.049 15:59:09 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:25:00.049 15:59:09 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:25:00.049 15:59:09 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:00.049 15:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.934 INFO: APP EXITING 00:25:14.934 INFO: killing all VMs 00:25:14.934 INFO: killing vhost app 00:25:14.934 INFO: EXIT DONE 00:25:17.472 Waiting for block devices as requested 00:25:17.731 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:17.731 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.731 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.990 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.990 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.990 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:17.990 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.249 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.249 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.249 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:18.507 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:18.507 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:18.508 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:18.767 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.767 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.767 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.767 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:22.061 Cleaning 00:25:22.061 Removing: /var/run/dpdk/spdk0/config 00:25:22.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:22.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:22.061 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:22.062 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:22.062 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:22.062 Removing: /var/run/dpdk/spdk1/config 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:22.062 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:22.062 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:22.062 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:22.062 Removing: /var/run/dpdk/spdk2/config 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:22.062 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:22.062 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:22.062 Removing: 
/var/run/dpdk/spdk3/config 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:22.062 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:22.062 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:22.062 Removing: /var/run/dpdk/spdk4/config 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:22.062 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:22.062 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:22.062 Removing: /dev/shm/bdev_svc_trace.1 00:25:22.062 Removing: /dev/shm/nvmf_trace.0 00:25:22.062 Removing: /dev/shm/spdk_tgt_trace.pid2236391 00:25:22.062 Removing: /var/run/dpdk/spdk0 00:25:22.062 Removing: /var/run/dpdk/spdk1 00:25:22.062 Removing: /var/run/dpdk/spdk2 00:25:22.062 Removing: /var/run/dpdk/spdk3 00:25:22.062 Removing: /var/run/dpdk/spdk4 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2234016 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2235160 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2236391 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2237042 00:25:22.062 Removing: 
/var/run/dpdk/spdk_pid2238113 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2238357 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2239587 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2239936 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2240299 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2242028 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2243334 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2243617 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2244116 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2244443 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2244745 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2245001 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2245249 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2245532 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2246501 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2249600 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2249995 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2250267 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2250492 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2250924 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2251001 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2251498 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2251719 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2251989 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2252157 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2252281 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2252492 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2253054 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2253311 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2253607 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2257544 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2261972 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2272182 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2272823 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2277330 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2277580 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2282196 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2288480 
00:25:22.062 Removing: /var/run/dpdk/spdk_pid2291303 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2301977 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2311157 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2312986 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2313915 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2331143 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2335599 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2381273 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2387106 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2392997 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2399244 00:25:22.062 Removing: /var/run/dpdk/spdk_pid2399247 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2400118 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2400858 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2401777 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2402349 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2402468 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2402696 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2402710 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2402719 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2403628 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2404539 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2405457 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2405926 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2405932 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2406218 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2407560 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2408620 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2416934 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2445942 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2450567 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2452184 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2454058 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2454343 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2454639 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2455012 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2455744 00:25:22.322 Removing: 
/var/run/dpdk/spdk_pid2458048 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2459149 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2459780 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2461991 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2462609 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2463337 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2467599 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2473225 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2473226 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2473227 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2477233 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2485582 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2489621 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2495790 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2497135 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2498689 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2500201 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2505286 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2509533 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2517145 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2517147 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2521859 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2522086 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2522321 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2522775 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2522780 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2527489 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2528056 00:25:22.322 Removing: /var/run/dpdk/spdk_pid2532431 00:25:22.322 Clean 00:28:13.884 16:02:11 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:28:13.884 16:02:11 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:28:13.884 16:02:11 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:28:13.897 [Pipeline] } 00:28:13.916 [Pipeline] // stage 00:28:13.923 [Pipeline] } 00:28:13.943 [Pipeline] // timeout 00:28:13.952 [Pipeline] } 00:28:13.956 ERROR: script returned exit code 1 00:28:13.956 Setting overall build result to 
FAILURE 00:28:13.971 [Pipeline] // catchError 00:28:13.977 [Pipeline] } 00:28:13.996 [Pipeline] // wrap 00:28:14.004 [Pipeline] } 00:28:14.019 [Pipeline] // catchError 00:28:14.030 [Pipeline] stage 00:28:14.033 [Pipeline] { (Epilogue) 00:28:14.048 [Pipeline] catchError 00:28:14.050 [Pipeline] { 00:28:14.066 [Pipeline] echo 00:28:14.068 Cleanup processes 00:28:14.076 [Pipeline] sh 00:28:14.363 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:14.363 2571467 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:14.378 [Pipeline] sh 00:28:14.664 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:14.664 ++ grep -v 'sudo pgrep' 00:28:14.664 ++ awk '{print $1}' 00:28:14.664 + sudo kill -9 00:28:14.664 + true 00:28:14.677 [Pipeline] sh 00:28:14.961 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:21.540 [Pipeline] sh 00:28:21.827 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:21.827 Artifacts sizes are good 00:28:21.841 [Pipeline] archiveArtifacts 00:28:21.849 Archiving artifacts 00:28:22.107 [Pipeline] sh 00:28:22.394 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:22.410 [Pipeline] cleanWs 00:28:22.421 [WS-CLEANUP] Deleting project workspace... 00:28:22.421 [WS-CLEANUP] Deferred wipeout is used... 00:28:22.428 [WS-CLEANUP] done 00:28:22.430 [Pipeline] } 00:28:22.446 [Pipeline] // catchError 00:28:22.456 [Pipeline] echo 00:28:22.458 Tests finished with errors. Please check the logs for more info. 00:28:22.463 [Pipeline] echo 00:28:22.466 Execution node will be rebooted. 00:28:22.484 [Pipeline] build 00:28:22.487 Scheduling project: reset-job 00:28:22.503 [Pipeline] sh 00:28:22.802 + logger -p user.info -t JENKINS-CI 00:28:22.811 [Pipeline] } 00:28:22.825 [Pipeline] // stage 00:28:22.831 [Pipeline] } 00:28:22.845 [Pipeline] // node 00:28:22.851 [Pipeline] End of Pipeline 00:28:22.894 Finished: FAILURE